kse-01/data.csv

5.4 MiB

1namefilelinetypecomment
20UserInputErrortensorflow/configure.py74class
31is_windowstensorflow/configure.py78function
42is_linuxtensorflow/configure.py82function
53is_macostensorflow/configure.py86function
64is_ppc64letensorflow/configure.py90function
75is_cygwintensorflow/configure.py94function
86get_inputtensorflow/configure.py98function
97symlink_forcetensorflow/configure.py109functionForce symlink, equivalent of 'ln -sf'. Args: target: items to link to. link_name: name of the link.
108sed_in_placetensorflow/configure.py126functionReplace old string with new string in file. Args: filename: string for filename. old: string to replace. new: new string to replace to.
119write_to_bazelrctensorflow/configure.py141function
1210write_action_env_to_bazelrctensorflow/configure.py146function
1311run_shelltensorflow/configure.py150function
1412cygpathtensorflow/configure.py163functionConvert path from posix to windows.
1513get_python_pathtensorflow/configure.py168functionGet the python site package paths.
1614get_python_major_versiontensorflow/configure.py198functionGet the python major version.
1715setup_pythontensorflow/configure.py203functionSetup python related env variables.
1816reset_tf_configure_bazelrctensorflow/configure.py273functionReset file that contains customized config settings.
1917cleanup_makefiletensorflow/configure.py278functionDelete any leftover BUILD files from the Makefile build. These files could interfere with Bazel parsing.
2018get_vartensorflow/configure.py292functionGet boolean input from user. If var_name is not set in env, ask user to enable query_item or not. If the response is empty, use the default. Args: environ_cp: copy of the os.environ. var_name: string for name of environment variable, e.g. "TF_NEED_CUDA". query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs". enabled_by_default: boolean for default behavior. question: optional string for how to ask for user input. yes_reply: optional string for reply when feature is enabled. no_reply: optional string for reply when feature is disabled. Returns: boolean value of the variable. Raises: UserInputError: if an environment variable is set, but it cannot be interpreted as a boolean indicator, assume that the user has made a scripting error, and will continue to provide invalid input. Raise the error to avoid infinitely looping.
2119set_build_vartensorflow/configure.py377functionSet if query_item will be enabled for the build. Ask user if query_item will be enabled. Default is used if no input is given. Set subprocess environment variable and write to .bazelrc if enabled. Args: environ_cp: copy of the os.environ. var_name: string for name of environment variable, e.g. "TF_NEED_CUDA". query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs". option_name: string for option to define in .bazelrc. enabled_by_default: boolean for default behavior. bazel_config_name: Name for Bazel --config argument to enable build feature.
2220set_action_env_vartensorflow/configure.py411functionSet boolean action_env variable. Ask user if query_item will be enabled. Default is used if no input is given. Set environment variable and write to .bazelrc. Args: environ_cp: copy of the os.environ. var_name: string for name of environment variable, e.g. "TF_NEED_CUDA". query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs". enabled_by_default: boolean for default behavior. question: optional string for how to ask for user input. yes_reply: optional string for reply when feature is enabled. no_reply: optional string for reply when feature is disabled. bazel_config_name: adding config to .bazelrc instead of action_env.
2321convert_version_to_inttensorflow/configure.py446functionConvert a version number to a integer that can be used to compare. Version strings of the form X.YZ and X.Y.Z-xxxxx are supported. The 'xxxxx' part, for instance 'homebrew' on OS/X, is ignored. Args: version: a version to be converted Returns: An integer if converted successfully, otherwise return None.
2422check_bazel_versiontensorflow/configure.py471functionCheck installed bazel version is between min_version and max_version. Args: min_version: string for minimum bazel version (must exist!). max_version: string for maximum bazel version (must exist!). Returns: The bazel version detected.
2523set_cc_opt_flagstensorflow/configure.py518functionSet up architecture-dependent optimization flags. Also append CC optimization flags to bazel.rc.. Args: environ_cp: copy of the os.environ.
2624set_tf_cuda_clangtensorflow/configure.py546functionset TF_CUDA_CLANG action_env. Args: environ_cp: copy of the os.environ.
2725set_tf_download_clangtensorflow/configure.py566functionSet TF_DOWNLOAD_CLANG action_env.
2826get_from_env_or_user_or_defaulttensorflow/configure.py582functionGet var_name either from env, or user or default. If var_name has been set as environment variable, use the preset value, else ask for user input. If no input is provided, the default is used. Args: environ_cp: copy of the os.environ. var_name: string for name of environment variable, e.g. "TF_NEED_CUDA". ask_for_var: string for how to ask for user input. var_default: default value string. Returns: string value for var_name
2927set_clang_cuda_compiler_pathtensorflow/configure.py607functionSet CLANG_CUDA_COMPILER_PATH.
3028prompt_loop_or_load_from_envtensorflow/configure.py630functionLoop over user prompts for an ENV param until receiving a valid response. For the env param var_name, read from the environment or verify user input until receiving valid input. When done, set var_name in the environ_cp to its new value. Args: environ_cp: (Dict) copy of the os.environ. var_name: (String) string for name of environment variable, e.g. "TF_MYVAR". var_default: (String) default value string. ask_for_var: (String) string for how to ask for user input. check_success: (Function) function that takes one argument and returns a boolean. Should return True if the value provided is considered valid. May contain a complex error message if error_msg does not provide enough information. In that case, set suppress_default_error to True. error_msg: (String) String with one and only one '%s'. Formatted with each invalid response upon check_success(input) failure. suppress_default_error: (Bool) Suppress the above error message in favor of one from the check_success function. resolve_symlinks: (Bool) Translate symbolic links into the real filepath. n_ask_attempts: (Integer) Number of times to query for valid input before raising an error and quitting. Returns: [String] The value of var_name after querying for input. Raises: UserInputError: if a query has been attempted n_ask_attempts times without success, assume that the user has made a scripting error, and will continue to provide invalid input. Raise the error to avoid infinitely looping.
3129create_android_ndk_ruletensorflow/configure.py696functionSet ANDROID_NDK_HOME and write Android NDK WORKSPACE rule.
3230create_android_sdk_ruletensorflow/configure.py724functionSet Android variables and write Android SDK WORKSPACE rule.
3331get_ndk_api_leveltensorflow/configure.py788functionGets the appropriate NDK API level to use for the provided Android NDK path.
3432set_gcc_host_compiler_pathtensorflow/configure.py836functionSet GCC_HOST_COMPILER_PATH.
3533reformat_version_sequencetensorflow/configure.py858functionReformat the version string to have the given number of sequences. For example: Given (7, 2) -> 7.0 (7.0.1, 2) -> 7.0 (5, 1) -> 5 (5.0.3.2, 1) -> 5 Args: version_str: String, the version string. sequence_count: int, an integer. Returns: string, reformatted version string.
3634set_tf_cuda_pathstensorflow/configure.py881functionSet TF_CUDA_PATHS.
3735set_tf_cuda_versiontensorflow/configure.py892functionSet TF_CUDA_VERSION.
3836set_tf_cudnn_versiontensorflow/configure.py904functionSet TF_CUDNN_VERSION.
3937is_cuda_compatibletensorflow/configure.py916functionCheck compatibility between given library and cudnn/cudart libraries.
4038set_tf_tensorrt_versiontensorflow/configure.py945functionSet TF_TENSORRT_VERSION.
4139set_tf_nccl_versiontensorflow/configure.py962functionSet TF_NCCL_VERSION.
4240get_native_cuda_compute_capabilitiestensorflow/configure.py979functionGet native cuda compute capabilities. Args: environ_cp: copy of the os.environ. Returns: string of native cuda compute capabilities, separated by comma.
4341set_tf_cuda_compute_capabilitiestensorflow/configure.py1003functionSet TF_CUDA_COMPUTE_CAPABILITIES.
4442set_other_cuda_varstensorflow/configure.py1074functionSet other CUDA related variables.
4543set_host_cxx_compilertensorflow/configure.py1083functionSet HOST_CXX_COMPILER.
4644set_host_c_compilertensorflow/configure.py1100functionSet HOST_C_COMPILER.
4745set_computecpp_toolkit_pathtensorflow/configure.py1117functionSet COMPUTECPP_TOOLKIT_PATH.
4846set_trisycl_include_dirtensorflow/configure.py1149functionSet TRISYCL_INCLUDE_DIR.
4947system_specific_test_configtensorflow/configure.py1173functionAdd default build and test flags required for TF tests to bazelrc.
5048set_system_libs_flagtensorflow/configure.py1216function
5149is_reduced_optimize_huge_functions_availabletensorflow/configure.py1233functionCheck to see if the system supports /d2ReducedOptimizeHugeFunctions. The above compiler flag is a new compiler flag introduced to the Visual Studio compiler in version 16.4 (available in Visual Studio 2019, Preview edition only, as of 2019-11-19). TensorFlow needs this flag to massively reduce compile times, but until 16.4 is officially released, we can't depend on it. See also https://groups.google.com/a/tensorflow.org/d/topic/build/SsW98Eo7l3o/discussion Because it's very annoying to check this manually (to check the MSVC installed versions, you need to use the registry, and it's not clear if Bazel will be using that install version anyway), we expect enviroments who know they may use this flag to export TF_VC_VERSION=16.4 TODO(angerson, gunan): Remove this function when TensorFlow's minimum VS version is upgraded to 16.4. Arguments: environ_cp: Environment of the current execution Returns: boolean, whether or not /d2ReducedOptimizeHugeFunctions is available on this machine.
5250set_windows_build_flagstensorflow/configure.py1262functionSet Windows specific build options.
5351config_info_linetensorflow/configure.py1283functionHelper function to print formatted help text for Bazel config options.
5452configure_iostensorflow/configure.py1288functionConfigures TensorFlow for iOS builds. This function will only be executed if `is_macos()` is true.
5553validate_cuda_configtensorflow/configure.py1305functionRun find_cuda_config.py and return cuda_toolkit_path, or None.
5654maintensorflow/configure.py1365function
5755_running_from_pip_packagetensorflow/tensorflow/api_template.__init__.py132function
5856_running_from_pip_packagetensorflow/tensorflow/api_template_v1.__init__.py142function
5957_LazyLoadertensorflow/tensorflow/virtual_root_template_v1.__init__.py33classLazily import a module so that we can forward it.
6058_forward_moduletensorflow/tensorflow/virtual_root_template_v1.__init__.py63function
6159_LazyLoadertensorflow/tensorflow/virtual_root_template_v2.__init__.py33classLazily import a module so that we can forward it.
6260_forward_moduletensorflow/tensorflow/virtual_root_template_v2.__init__.py63function
6361VarsAndArithmeticObjectGraphtensorflow/tensorflow/cc/saved_model/testdata/generate_saved_models.py37classThree vars (one in a sub-module) and compute method.
6462ReferencesParenttensorflow/tensorflow/cc/saved_model/testdata/generate_saved_models.py55class
6563CyclicModuletensorflow/tensorflow/cc/saved_model/testdata/generate_saved_models.py64class
6664maintensorflow/tensorflow/cc/saved_model/testdata/generate_saved_models.py77function
6765tfaddtensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py48function
6866tfadd_with_ckpttensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py54function
6967tfadd_with_ckpt_savertensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py69function
7068tfassert_eqtensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py88function
7169tfcondtensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py96function
7270tfgathertensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py104function
7371tfmatmultensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py110function
7472tfmatmulandaddtensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py116function
7573tffunctiontensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py124function
7674tfsplitstensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py135functionA more complex graph, including splits.
7775tftop_ktensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py152function
7876tfvariable_readonlytensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py158function
7977tfvariabletensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py169function
8078tfvariable_sequential_updatestensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py177function
8179export_debug_infotensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py189functionExports debug information from a graph. Args: exported_graph: A Graph that has been created by tracing a saveable view. Returns: Corresponding GraphDebugInfo with traces for all ops in exported_graph.
8280write_graphtensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py204functionBuild a graph using build_graph and write it out.
8381maintensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py223function
8482_XlaClusterOutputGradtensorflow/tensorflow/compiler/jit/ops/xla_ops_grad.py25function
8583TestGraphDebugInfotensorflow/tensorflow/compiler/mlir/lite/tests/debuginfo/concrete_function_error.py32classTest stack trace can be displayed.
8684maintensorflow/tensorflow/compiler/mlir/lite/tests/debuginfo/concrete_function_error.py64function
8785TestModuletensorflow/tensorflow/compiler/mlir/lite/tests/debuginfo/saved_model_error.py32classThe test model has unsupported op.
8886TestGraphDebugInfotensorflow/tensorflow/compiler/mlir/lite/tests/debuginfo/saved_model_error.py41classTest stack trace can be displayed.
8987maintensorflow/tensorflow/compiler/mlir/lite/tests/debuginfo/saved_model_error.py73functiontest driver method writes the error message to stdout.
9088TestModuletensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/basic.py38class
9189Testtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/basic_v1.py49function
9290TestModuletensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/call_to_exported.py27class
9391do_testtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/common.py43functionRuns test. 1. Performs absl and tf "main"-like initialization that must run before almost anything else. 2. Converts `tf.Module` to SavedModel 3. Converts SavedModel to MLIR 4. Prints the textual MLIR to stdout (it is expected that the caller will have FileCheck checks in its file to check this output). This is only for use by the MLIR SavedModel importer tests. Args: create_module_fn: A callable taking no arguments, which returns the `tf.Module` to be converted and printed. exported_names: A set of exported names for the MLIR converter (default is "export all"). show_debug_info: If true, shows debug locations in the resulting MLIR.
9492set_tf_optionstensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/common_v1.py38function
9593do_testtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/common_v1.py49functionRuns test. 1. Performs absl and tf "main"-like initialization that must run before almost anything else. 2. Converts signature_def_map to SavedModel V1 3. Converts SavedModel V1 to MLIR 4. Prints the textual MLIR to stdout (it is expected that the caller will have FileCheck checks in its file to check this output). This is only for use by the MLIR SavedModel importer tests. Args: create_signature: A functor that return signature_def_map, init_op and assets_collection. signature_def_map is a map from string key to signature_def. The key will be used as function name in the resulting MLIR. canonicalize: If true, canonicalizer will be run on the resulting MLIR. show_debug_info: If true, shows debug locations in the resulting MLIR.
9694Testtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/control_flow_duplicate_v1.py42function
9795Testtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/control_flow_upgrade_legacy_v1.py34function
9896ReferencesParenttensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/cyclic_object_graph.py27class
9997TestModuletensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/cyclic_object_graph.py38class
10098Childtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/dag_object_graph.py27class
10199TestModuletensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/dag_object_graph.py37class
102100TestModuletensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/debug_info.py27class
103101plustensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/defun_export.py29function
104102test_defuntensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/defun_export.py33function
105103Testtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/duplicate_method_names_v1.py37function
106104TestModuletensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/exported_python_args.py27class
107105write_vocabulary_filetensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/hash_table_asset_v1.py39functionWrite temporary vocab file for module construction.
108106testtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/hash_table_asset_v1.py49function
109107Testtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/hash_table_v1.py60function
110108mnist_modeltensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/keras.py27functionCreates a MNIST model.
111109TestModuletensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/keras.py36class
112110Testtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/multi_arguments_results_v1.py52function
113111Testtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/multi_variables_v1.py39function
114112TestModuletensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/partially_shaped_variables.py27class
115113Testtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/remove_init_variable_v1.py50function
116114TestModuletensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/shapes_for_arguments.py27class
117115Testtensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/shared_variable_v1.py41function
118116TestModuletensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/structured_input.py27class
119117AdadeltaOptimizerTesttensorflow/tensorflow/compiler/tests/adadelta_test.py31class
120118AdagradDAOptimizerTesttensorflow/tensorflow/compiler/tests/adagrad_da_test.py32class
121119AdagradOptimizerTesttensorflow/tensorflow/compiler/tests/adagrad_test.py31class
122120adam_update_numpytensorflow/tensorflow/compiler/tests/adam_test.py34function
123121AdamOptimizerTesttensorflow/tensorflow/compiler/tests/adam_test.py52class
124122XlaAddNTesttensorflow/tensorflow/compiler/tests/add_n_test.py30class
125123ArgMinMaxTesttensorflow/tensorflow/compiler/tests/argminmax_test.py30class
126124BinaryOpsTesttensorflow/tensorflow/compiler/tests/binary_ops_test.py39classTest cases for binary operators.
127125BucketizationOpTesttensorflow/tensorflow/compiler/tests/bucketize_op_test.py30class
128126CaseTesttensorflow/tensorflow/compiler/tests/case_test.py31class
129127CategoricalTesttensorflow/tensorflow/compiler/tests/categorical_op_test.py36classTest cases for random-number generating operators.
130128CholeskyOpTesttensorflow/tensorflow/compiler/tests/cholesky_op_test.py35class
131129ClusteringTesttensorflow/tensorflow/compiler/tests/clustering_test.py35class
132130ComplexNumbersDivisionTesttensorflow/tensorflow/compiler/tests/complex_div_test.py35classTest cases for complex numbers division operators.
133131ConcatTesttensorflow/tensorflow/compiler/tests/concat_ops_test.py34class
134132ConcatOffsetTesttensorflow/tensorflow/compiler/tests/concat_ops_test.py335class
135133PackTesttensorflow/tensorflow/compiler/tests/concat_ops_test.py349class
136134CondTesttensorflow/tensorflow/compiler/tests/cond_test.py39class
137135Conv2DTesttensorflow/tensorflow/compiler/tests/conv2d_test.py42class
138136Conv2DBackpropInputTesttensorflow/tensorflow/compiler/tests/conv2d_test.py236class
139137Conv2DBackpropFilterTesttensorflow/tensorflow/compiler/tests/conv2d_test.py534class
140138Conv3DBackpropFilterV2GradTesttensorflow/tensorflow/compiler/tests/conv3d_test.py36class
141139Conv3DTransposeTesttensorflow/tensorflow/compiler/tests/conv3d_test.py69class
142140ConvolutionNodeNameTesttensorflow/tensorflow/compiler/tests/conv_node_name_test.py35classVerify convolution node name match. Verify convolution node names on TPU and CPU match with dilation > 1.
143141XlaDataFormatDimMapTesttensorflow/tensorflow/compiler/tests/data_format_ops_test.py30class
144142XlaPermuteOpTesttensorflow/tensorflow/compiler/tests/data_format_ops_test.py67class
145143GetRunMetadataLabelstensorflow/tensorflow/compiler/tests/dense_layer_test.py36functionReturns all labels in run_metadata.
146144InLabelstensorflow/tensorflow/compiler/tests/dense_layer_test.py45functionReturns true iff one of the labels contains substr.
147145DenseLayerTesttensorflow/tensorflow/compiler/tests/dense_layer_test.py50class
148146ReferenceDepthwiseConv2Dtensorflow/tensorflow/compiler/tests/depthwise_conv_op_test.py35function
149147ConfigsToTesttensorflow/tensorflow/compiler/tests/depthwise_conv_op_test.py64functionIterator for different convolution shapes, strides and paddings. Yields: Tuple (input_size, filter_size, out_size, stride, padding), the depthwise convolution parameters.
150148ConfigsWithDilationsToTesttensorflow/tensorflow/compiler/tests/depthwise_conv_op_test.py91functionIterator for different convolution shapes, strides and paddings. Yields: Tuple (input_size, filter_size, out_size, stride, dilation, padding), the depthwise convolution parameters.
151149CheckGradConfigsToTesttensorflow/tensorflow/compiler/tests/depthwise_conv_op_test.py117functionIterator for different convolution shapes, strides and paddings. compute_gradient_error() is very expensive. So the configs should be relatively small. Yields: Tuple (input_size, filter_size, out_size, stride, padding), the depthwise convolution parameters.
152150DepthwiseConv2DTesttensorflow/tensorflow/compiler/tests/depthwise_conv_op_test.py144class
153151DynamicUpdateSliceOpsTesttensorflow/tensorflow/compiler/tests/dynamic_slice_ops_test.py30class
154152DynamicStitchTesttensorflow/tensorflow/compiler/tests/dynamic_stitch_test.py30class
155153EagerTesttensorflow/tensorflow/compiler/tests/eager_test.py47class
156154EagerFunctionTesttensorflow/tensorflow/compiler/tests/eager_test.py301class
157155ExcessivePaddingTesttensorflow/tensorflow/compiler/tests/eager_test.py721classTest that eager execution works with TPU flattened tensors. Tensors that would normally be excessively padded when written to TPU memory are reshaped to 1-D flat tensors. This test case verifies that such tensors work with eager execution. The flattening currently only happens on TPU, but tests should work fine with all backends as flattening is transparent.
158156multiple_tpustensorflow/tensorflow/compiler/tests/eager_test.py772function
159157MultiDeviceTesttensorflow/tensorflow/compiler/tests/eager_test.py777classTest running TPU computation on more than one core.
160158EinsumOpTesttensorflow/tensorflow/compiler/tests/einsum_op_test.py30classTest cases for einsum op.
161159EnsureShapeOpTesttensorflow/tensorflow/compiler/tests/ensure_shape_op_test.py29class
162160ExtractImagePatchestensorflow/tensorflow/compiler/tests/extract_image_patches_op_test.py29classFunctional tests for ExtractImagePatches op.
163161FakeQuantWithMinMaxArgsTesttensorflow/tensorflow/compiler/tests/fake_quant_ops_test.py27classTest cases for FakeQuantWithMinMaxArgs operation.
164162FakeQuantWithMinMaxArgsGradientTesttensorflow/tensorflow/compiler/tests/fake_quant_ops_test.py125classTest cases for FakeQuantWithMinMaxArgsGradient operation.
165163FakeQuantWithMinMaxVarsTesttensorflow/tensorflow/compiler/tests/fake_quant_ops_test.py226classTest cases for FakeQuantWithMinMaxVars operation.
166164FakeQuantWithMinMaxVarsGradientTesttensorflow/tensorflow/compiler/tests/fake_quant_ops_test.py331classTest cases for FakeQuantWithMinMaxVarsGradient operation.
167165pick_10tensorflow/tensorflow/compiler/tests/fft_test.py38function
168166to_32bittensorflow/tensorflow/compiler/tests/fft_test.py45function
169167FFTTesttensorflow/tensorflow/compiler/tests/fft_test.py60class
170168FIFOQueueTesttensorflow/tensorflow/compiler/tests/fifo_queue_test.py31class
171169FtrlOptimizerTesttensorflow/tensorflow/compiler/tests/ftrl_test.py32class
172170FunctionTesttensorflow/tensorflow/compiler/tests/function_test.py31class
173171FusedBatchNormTesttensorflow/tensorflow/compiler/tests/fused_batchnorm_test.py45class
174172GatherNdTesttensorflow/tensorflow/compiler/tests/gather_nd_op_test.py30class
175173GatherTesttensorflow/tensorflow/compiler/tests/gather_test.py34class
176174GatherBenchmarktensorflow/tensorflow/compiler/tests/gather_test.py158classMicrobenchmarks for the gather op.
177175_generate_numpy_random_rgbtensorflow/tensorflow/compiler/tests/image_ops_test.py40function
178176RGBToHSVTesttensorflow/tensorflow/compiler/tests/image_ops_test.py47class
179177AdjustContrastTesttensorflow/tensorflow/compiler/tests/image_ops_test.py110class
180178AdjustHueTesttensorflow/tensorflow/compiler/tests/image_ops_test.py174class
181179AdjustSaturationTesttensorflow/tensorflow/compiler/tests/image_ops_test.py309class
182180ResizeNearestNeighborTesttensorflow/tensorflow/compiler/tests/image_ops_test.py409class
183181ResizeBilinearTesttensorflow/tensorflow/compiler/tests/image_ops_test.py548class
184182ResizeBilinearGradTesttensorflow/tensorflow/compiler/tests/image_ops_test.py631class
185183ResizeBilinearNonAlignCornersTesttensorflow/tensorflow/compiler/tests/image_ops_test.py713class
186184NonMaxSuppressionTesttensorflow/tensorflow/compiler/tests/image_ops_test.py776class
187185BatchedNonMaxSuppressionCorrectnessTesttensorflow/tensorflow/compiler/tests/image_ops_test.py985class
188186NoRewriteSessionConfigtensorflow/tensorflow/compiler/tests/jit_test.py46function
189187CompiledKerneltensorflow/tensorflow/compiler/tests/jit_test.py56functionExecute 'fn' as a compiled XLA kernel, with 'inputs'.
190188RunMetadataLabelstensorflow/tensorflow/compiler/tests/jit_test.py68functionReturns all labels in run_metadata.
191189InLabelstensorflow/tensorflow/compiler/tests/jit_test.py77functionReturns true iff one of the labels contains substr.
192190MetadataHasXlaRunOptensorflow/tensorflow/compiler/tests/jit_test.py82functionReturns true if there are XlaRun kernels in run_metadata's timeline.
193191JitLaunchTesttensorflow/tensorflow/compiler/tests/jit_test.py89class
194192XlaCompilationTesttensorflow/tensorflow/compiler/tests/jit_test.py279classTests for auto-compilation on CPU/GPU devices.
195193ElementWiseFusionTesttensorflow/tensorflow/compiler/tests/jit_test.py480class
196194LazyCompilationTesttensorflow/tensorflow/compiler/tests/jit_test.py520class
197195ListDiffTesttensorflow/tensorflow/compiler/tests/listdiff_op_test.py31class
198196LRNTesttensorflow/tensorflow/compiler/tests/lrn_ops_test.py39class
199197Cliptensorflow/tensorflow/compiler/tests/lstm.py38functionClips x to the range [-1., 1.].
200198LSTMCellWeightsShapetensorflow/tensorflow/compiler/tests/lstm.py43functionReturns the shape of the weights for a single LSTM cell.
201199LSTMCelltensorflow/tensorflow/compiler/tests/lstm.py50functionUnrolls a single LSTM cell with clipped activations forward by one step. Args: weights: Weight matrix with shape LSTMCellWeightsShape. m_prev: Previous m states with shape [batch_size, num_nodes]. c_prev: Previous c states with shape [batch_size, num_nodes]. x: Input with shape [batch_size, num_inputs]. pad: Padding with shape [batch_size, 1]. Each padding value is either 0 or 1, where 1 indicates padding; i.e. the input is shorter than the sequence length, and the (m, c) states should simply be passed through from the previous states. Returns: The next (m, c) states, each with shape [batch_size, num_nodes].
202200LSTMLayertensorflow/tensorflow/compiler/tests/lstm.py88functionUnrolls a layer of LSTM cells forward by the sequence length. The sequence length is determined by the length of x_seq and pad_seq, which must be the same. Args: cell_name: Base name of each cell. weights: Weight matrix with shape LSTMCellWeightsShape. m: Initial m states with shape [batch_size, num_nodes]. c: Initial c states with shape [batch_size, num_nodes]. x_seq: List of inputs, each with shape [batch_size, num_inputs]. The length of the list is the sequence length. pad_seq: List of paddings, each with shape [batch_size, 1]. The length of the list is the sequence length. Each padding value is either 0 or 1, where 1 indicates padding; i.e. the input is shorter than the sequence length. Returns: List of per-sequence-step outputs, each with shape [batch_size, num_nodes]. Raises: ValueError: If len(x_seq) != len(pad_seq).
203201RandomVartensorflow/tensorflow/compiler/tests/lstm.py121functionReturns a variable of the given shape initialized to random values.
204202RandomInputstensorflow/tensorflow/compiler/tests/lstm.py127functionReturns randomly initialized (x_seq, pad_seq) sequences.
205203BuildLSTMLayertensorflow/tensorflow/compiler/tests/lstm.py140functionBuilds a single LSTM layer with random weights and inputs. Args: batch_size: Inputs are fed in batches of this size. seq_length: The sequence length to unroll the LSTM layer. num_inputs: Dimension of inputs that are fed into each LSTM cell. num_nodes: The number of nodes in each LSTM cell. Returns: (out_seq, weights) pair. The out_seq is a list of per-sequence-step outputs, each with shape [batch_size, num_nodes]. The weights are a list of weight variables that may be trained.
206204_DumpGraphtensorflow/tensorflow/compiler/tests/lstm_test.py40function
207205_Sigmoidtensorflow/tensorflow/compiler/tests/lstm_test.py47function
208206_Cliptensorflow/tensorflow/compiler/tests/lstm_test.py51function
209207LSTMTesttensorflow/tensorflow/compiler/tests/lstm_test.py55class
210208LSTMBenchmarktensorflow/tensorflow/compiler/tests/lstm_test.py238classMcro-benchmarks for a single layer of LSTM cells.
211209ManipOpsTesttensorflow/tensorflow/compiler/tests/manip_ops_test.py30classTest cases for manip ops.
212210MatrixBandPartTesttensorflow/tensorflow/compiler/tests/matrix_band_part_test.py30class
213211zip_to_first_list_lengthtensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py32function
214212repack_diagonalstensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py40function
215213repack_diagonals_in_teststensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py77function
216214square_casestensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py95function
217215tall_casestensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py173function
218216fat_casestensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py261function
219217all_teststensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py329function
220218MatrixDiagTesttensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py333class
221219MatrixSetDiagTesttensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py519class
222220MatrixDiagPartTesttensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py652class
223221InverseOpTesttensorflow/tensorflow/compiler/tests/matrix_inverse_op_test.py31class
224222MatrixSolveOpTesttensorflow/tensorflow/compiler/tests/matrix_solve_op_test.py30class
225223MakePlaceholdertensorflow/tensorflow/compiler/tests/matrix_triangular_solve_op_test.py36function
226224MatrixTriangularSolveOpTesttensorflow/tensorflow/compiler/tests/matrix_triangular_solve_op_test.py40class
227225MomentumOptimizerTesttensorflow/tensorflow/compiler/tests/momentum_test.py33class
228226NAryOpsTesttensorflow/tensorflow/compiler/tests/nary_ops_test.py32class
229227NullaryOpsTesttensorflow/tensorflow/compiler/tests/nullary_ops_test.py29class
230228PlaceholderTesttensorflow/tensorflow/compiler/tests/placeholder_test.py28class
231229_AvgPoolGradtensorflow/tensorflow/compiler/tests/pooling_ops_3d_test.py35function
232230Pooling3DTesttensorflow/tensorflow/compiler/tests/pooling_ops_3d_test.py45class
233231NHWCToNCHWtensorflow/tensorflow/compiler/tests/pooling_ops_test.py33functionConvert the input from NHWC format to NCHW. Args: input_tensor: a 4-D tensor, or a 4-element array representing the same. Returns: the converted tensor or a shape array
234232NCHWToNHWCtensorflow/tensorflow/compiler/tests/pooling_ops_test.py48functionConvert the input from NCHW format to NHWC. Args: input_tensor: a 4-D tensor, or a 4-element array representing the same. Returns: the converted tensor or a shape array
235233GetTestConfigstensorflow/tensorflow/compiler/tests/pooling_ops_test.py63functionGet all the valid tests configs to run. Returns: all the valid test configs
236234PoolingTesttensorflow/tensorflow/compiler/tests/pooling_ops_test.py73class
237235PoolGradTesttensorflow/tensorflow/compiler/tests/pooling_ops_test.py292class
238236ProximalAdagradOptimizerTesttensorflow/tensorflow/compiler/tests/proximal_adagrad_test.py32class
239237ProximalGradientDescentOptimizerTesttensorflow/tensorflow/compiler/tests/proximal_gradient_descent_test.py32class
240238QrOpTesttensorflow/tensorflow/compiler/tests/qr_op_test.py33class
241239QuantizedOpsTesttensorflow/tensorflow/compiler/tests/quantized_ops_test.py36class
242240DequantizedOpsTesttensorflow/tensorflow/compiler/tests/quantized_ops_test.py53class
243241RandomOpsTesttensorflow/tensorflow/compiler/tests/random_ops_test.py34classTest cases for random-number generating operators.
244242ReduceOpsTesttensorflow/tensorflow/compiler/tests/reduce_ops_test.py37class
245243ReduceOpPrecisionTesttensorflow/tensorflow/compiler/tests/reduce_ops_test.py183class
246244ReduceWindowTesttensorflow/tensorflow/compiler/tests/reduce_window_test.py31classTest cases for xla.reduce_window.
247245ReshapeTesttensorflow/tensorflow/compiler/tests/reshape_op_test.py30class
248246ReverseOpsTesttensorflow/tensorflow/compiler/tests/reverse_ops_test.py32class
249247ReverseSequenceTesttensorflow/tensorflow/compiler/tests/reverse_sequence_op_test.py29class
250248RmspropTesttensorflow/tensorflow/compiler/tests/rmsprop_test.py31class
251249numpy_reversetensorflow/tensorflow/compiler/tests/scan_ops_test.py32function
252250handle_optionstensorflow/tensorflow/compiler/tests/scan_ops_test.py43functionAdds tf options to numpy scan ops.
253251CumsumTesttensorflow/tensorflow/compiler/tests/scan_ops_test.py72class
254252CumprodTesttensorflow/tensorflow/compiler/tests/scan_ops_test.py150class
255253_AsTypetensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py31function
256254_FlatInnerDimstensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py35function
257255_FlatOuterDimstensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py42function
258256_NumpyScatterNdtensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py49function
259257_NumpyUpdatetensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py66function
260258ScatterNdTesttensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py71class
261259ScatterNdTensorTesttensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py193class
262260SearchSorteddOpTesttensorflow/tensorflow/compiler/tests/searchsorted_op_test.py28class
263261SegmentReductionOpsTesttensorflow/tensorflow/compiler/tests/segment_reduction_ops_test.py32classTest cases for segment reduction ops.
264262SelfAdjointEigOpTesttensorflow/tensorflow/compiler/tests/self_adjoint_eig_op_test.py32class
265263SliceTesttensorflow/tensorflow/compiler/tests/slice_ops_test.py29class
266264StridedSliceTesttensorflow/tensorflow/compiler/tests/slice_ops_test.py127class
267265XlaSortOpTesttensorflow/tensorflow/compiler/tests/sort_ops_test.py32class
268266space_to_batch_directtensorflow/tensorflow/compiler/tests/spacetobatch_op_test.py30functionDirect Python implementation of space-to-batch conversion. This is used for tests only. Args: input_array: N-D array block_shape: 1-D array of shape [num_block_dims]. paddings: 2-D array of shape [num_block_dims, 2]. Returns: Converted tensor.
269267SpaceToBatchTesttensorflow/tensorflow/compiler/tests/spacetobatch_op_test.py71classTests input-output pairs for the SpaceToBatch and BatchToSpace ops.
270268SpaceToBatchNDTesttensorflow/tensorflow/compiler/tests/spacetobatch_op_test.py152classTests input-output pairs for the SpaceToBatchND and BatchToSpaceND ops.
271269_SparseToDensetensorflow/tensorflow/compiler/tests/sparse_to_dense_op_test.py31function
272270SparseToDenseTesttensorflow/tensorflow/compiler/tests/sparse_to_dense_op_test.py46class
273271_igammatensorflow/tensorflow/compiler/tests/special_math_test.py48function
274272_igammactensorflow/tensorflow/compiler/tests/special_math_test.py53function
275273implicit_reparameterization_gradtensorflow/tensorflow/compiler/tests/special_math_test.py58function
276274_log1ptensorflow/tensorflow/compiler/tests/special_math_test.py65function
277275Log1pTesttensorflow/tensorflow/compiler/tests/special_math_test.py69class
278276IgammaTesttensorflow/tensorflow/compiler/tests/special_math_test.py139class
279277IgammacTesttensorflow/tensorflow/compiler/tests/special_math_test.py324class
280278StackOpTesttensorflow/tensorflow/compiler/tests/stack_ops_test.py32class
281279xla_devicetensorflow/tensorflow/compiler/tests/stateful_random_ops_test.py41function
282280xla_device_nametensorflow/tensorflow/compiler/tests/stateful_random_ops_test.py55function
283281StatefulRandomOpsTesttensorflow/tensorflow/compiler/tests/stateful_random_ops_test.py64classTest cases for stateful random-number generator operators.
284282StatelessRandomOpsTesttensorflow/tensorflow/compiler/tests/stateless_random_ops_test.py33classTest cases for stateless random-number generator operators.
285283StatelessRandomOpsBenchmarktensorflow/tensorflow/compiler/tests/stateless_random_ops_test.py136classMicrobenchmarks for the stateless random ops.
286284SvdOpTesttensorflow/tensorflow/compiler/tests/svd_op_test.py33class
287285_make_convertertensorflow/tensorflow/compiler/tests/tensor_array_ops_test.py42function
288286TensorArrayTesttensorflow/tensorflow/compiler/tests/tensor_array_ops_test.py53class
289287ListOpsTesttensorflow/tensorflow/compiler/tests/tensor_list_ops_test.py34class
290288TernaryOpsTesttensorflow/tensorflow/compiler/tests/ternary_ops_test.py34class
291289ConvertBetweenDataFormatstensorflow/tensorflow/compiler/tests/test_utils.py26functionConverts 4D tensor between data formats.
292290PermuteDimsBetweenDataFormatstensorflow/tensorflow/compiler/tests/test_utils.py47functionGet new shape for converting between data formats.
293291RunWithWarmuptensorflow/tensorflow/compiler/tests/test_utils.py71functionRuns a graph a few times to ensure that its clusters are compiled.
294292_tfconsttensorflow/tensorflow/compiler/tests/tridiagonal_solve_ops_test.py39function
295293_tf_onestensorflow/tensorflow/compiler/tests/tridiagonal_solve_ops_test.py43function
296294TridiagonalSolveOpsTesttensorflow/tensorflow/compiler/tests/tridiagonal_solve_ops_test.py47classTest for tri-diagonal matrix related ops.
297295nhwc_to_formattensorflow/tensorflow/compiler/tests/unary_ops_test.py37functionConverts a numpy array from NHWC format to `data_format`.
298296UnaryOpsTesttensorflow/tensorflow/compiler/tests/unary_ops_test.py48classTest cases for unary operators.
299297UnstackOpTesttensorflow/tensorflow/compiler/tests/unstack_test.py29class
300298VariableOpsTesttensorflow/tensorflow/compiler/tests/variable_ops_test.py40classTest cases for resource variable operators.
301299StridedSliceAssignCheckertensorflow/tensorflow/compiler/tests/variable_ops_test.py422classCompares the results of a slice assignment using Tensorflow and numpy.
302300SliceAssignTesttensorflow/tensorflow/compiler/tests/variable_ops_test.py451class
303301WhileTesttensorflow/tensorflow/compiler/tests/while_test.py39class
304302is_compile_on_demandtensorflow/tensorflow/compiler/tests/while_test.py260function
305303XlaDeviceGpuTesttensorflow/tensorflow/compiler/tests/xla_device_gpu_test.py28class
306304XlaDeviceTesttensorflow/tensorflow/compiler/tests/xla_device_test.py32class
307305XlaOpsNumericalTesttensorflow/tensorflow/compiler/tests/xla_ops_test.py37class
308306XlaOpsShapeInferenceTesttensorflow/tensorflow/compiler/tests/xla_ops_test.py366class
309307parse_disabled_manifesttensorflow/tensorflow/compiler/tests/xla_test.py55function
310308XLATestCasetensorflow/tensorflow/compiler/tests/xla_test.py81classXLA test cases are parameterized test cases.
311309Benchmarktensorflow/tensorflow/compiler/tests/xla_test.py250functionBuild a graph and run benchmarks against it, with or without XLA. Args: tf_bench: An instance of tf.test.Benchmark, used to run the benchmark. builder_fn: A function that builds a graph when invoked, and returns (name, fetches), where name is the name of the test, and fetches is a list of tensors to fetch as output. use_xla_jit: If true compile with the XLA JIT, otherwise use regular TF. device: The tensorflow device to run on, e.g. "cpu", "gpu". separate_compiled_gradients: If true put each gradient subgraph into a separate compilation scope. This gives fine-grained control over which portions of the graph will be compiled as a single unit. Compiling gradients separately may yield better performance for some graphs. The scope is named based on the scope of the forward computation as well as the name of the gradients. As a result, the gradients will be compiled in a scope that is separate from both the forward computation, and from other gradients.
312310XlaTestCaseTestCasetensorflow/tensorflow/compiler/tests/xla_test_test.py25class
313311_unary_optensorflow/tensorflow/compiler/tf2xla/python/xla.py70functionWrapper that restricts `fn` to have the correct signature.
314312_broadcasting_binary_optensorflow/tensorflow/compiler/tf2xla/python/xla.py119functionWraps a binary Tensorflow operator and performs XLA-style broadcasting.
315313_shift_right_logical_helpertensorflow/tensorflow/compiler/tf2xla/python/xla.py152functionPerforms an integer right logical shift irrespective of input type.
316314_shift_right_arithmetic_helpertensorflow/tensorflow/compiler/tf2xla/python/xla.py167functionPerforms an integer right arithmetic shift irrespective of input type.
317315_binary_optensorflow/tensorflow/compiler/tf2xla/python/xla.py211functionWrapper that restricts `fn` to have the correct signature.
318316broadcasttensorflow/tensorflow/compiler/tf2xla/python/xla.py226function
319317clamptensorflow/tensorflow/compiler/tf2xla/python/xla.py234function
320318convtensorflow/tensorflow/compiler/tf2xla/python/xla.py241functionWraps the XLA ConvGeneralDilated operator. ConvGeneralDilated is the most general form of XLA convolution and is documented at https://www.tensorflow.org/performance/xla/operation_semantics#conv_convolution Args: lhs: the input tensor rhs: the kernel tensor window_strides: the inter-window strides padding: the padding to apply at the start and end of each input dimensions lhs_dilation: dilation to apply between input elements rhs_dilation: dilation to apply between kernel elements dimension_numbers: a `ConvolutionDimensionNumbers` proto. feature_group_count: number of feature groups for grouped convolution. precision_config: a `xla.PrecisionConfig` proto. name: an optional name for the operator Returns: A tensor representing the output of the convolution.
321319dottensorflow/tensorflow/compiler/tf2xla/python/xla.py291function
322320dot_generaltensorflow/tensorflow/compiler/tf2xla/python/xla.py295function
323321self_adjoint_eigtensorflow/tensorflow/compiler/tf2xla/python/xla.py307function
324322svdtensorflow/tensorflow/compiler/tf2xla/python/xla.py311function
325323random_normaltensorflow/tensorflow/compiler/tf2xla/python/xla.py327function
326324random_uniformtensorflow/tensorflow/compiler/tf2xla/python/xla.py333function
327325reduce_windowtensorflow/tensorflow/compiler/tf2xla/python/xla.py343functionWraps the XLA ReduceWindow operator. ReduceWindow is documented at https://www.tensorflow.org/performance/xla/operation_semantics#reducewindow . Args: operand: the input tensor init: a scalar tensor representing the initial value for the reduction reducer: a reduction function that combines a pair of scalars. window_dimensions: shape of the window, as a list of integers window_strides: inter-window strides, as a list of integers. Optional; if omitted, defaults to strides of 1. padding: padding to apply to 'operand'. List of (low, high) pairs of integers that specify the padding to apply before and after each dimension. Optional; if omitted, defaults to no padding. name: the operator name, or None. Returns: A tensor that represents the output of the reduce_window operator.
328326reshapetensorflow/tensorflow/compiler/tf2xla/python/xla.py391function
329327selecttensorflow/tensorflow/compiler/tf2xla/python/xla.py398function
330328slicetensorflow/tensorflow/compiler/tf2xla/python/xla.py406function
331329_sharding_gradtensorflow/tensorflow/compiler/tf2xla/python/xla.py418function
332330_spmd_full_to_shard_shape_gradtensorflow/tensorflow/compiler/tf2xla/python/xla.py431function
333331_spmd_shard_to_full_shape_gradtensorflow/tensorflow/compiler/tf2xla/python/xla.py440function
334332gathertensorflow/tensorflow/compiler/tf2xla/python/xla.py452function
335333scattertensorflow/tensorflow/compiler/tf2xla/python/xla.py463function
336334Shardingtensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py28classA class to support adding sharding attributes to Ops. Use the factory constructors and then call apply_to_tensor: Sharding.replicate().apply_to_tensor(tensor)
337335replicatetensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py179function
338336assign_devicetensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py188functionReturns a tensor that has AssignDevice sharding attribute.
339337tiletensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py202functionReturns a tensor that has tiled sharding. Args: tensor: A tf.Tensor to shard. tile_assignment: An np.ndarray describing the topology of the tiling and which device will compute which part of the topology. assign_tuple_sharding: If the sharding type should be a tuple. use_sharding_op: If true, adds a sharding op to set the sharding.
340338splittensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py224functionReturns a tensor that is split along the given dimension. Args: tensor: A tf.Tensor to split. split_dimension: The dimension to split. num_devices: The number of devices to partition the dimension. assign_tuple_sharding: If the sharding type should be a tuple. use_sharding_op: If true, adds a sharding op to set the sharding. input_shape: The full shape of the input tensor.
341339get_op_shardingtensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py248functionReturns sharding attribute of an op. Args: op: a TensorFlow op. Returns: The attribute representing XLA sharding on this op.
342340auto_to_manual_spmd_partitiontensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py260functionSwitches from automatic SPMD partitioning to manual partitioning. Converts a full-shaped tensor (to be automatically partitioned by SPMD partitioner) to a shard-shaped tensor to be consumed by manually partitioned ops. Args: tensor: A tf.Tensor in full shape. manual_sharding: a serialized string of OpSharding to be used in manual partitioning. Returns: A shard-shaped tensor to be consumed by manually partitioned ops.
343341manual_to_auto_spmd_partitiontensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py279functionSwitches from manual partitioning to automatic SPMD partitioning. Converts a shard-shaped tensor (manually partitioned in SPMD-style) to a full-shaped tensor to be partitioned automatically by the SPMD partitioner. Args: tensor: A tf.Tensor in shard shape. manual_sharding: a serialized string of OpSharding to be used in manual partitioning. full_shape: the shape of tensor before partitioning. Returns: A full-shaped tensor to be partitioned automatically by the SPMD partitioner.
344342numpy_assert_allclosetensorflow/tensorflow/compiler/xla/python/bfloat16_test.py35function
345343Bfloat16Testtensorflow/tensorflow/compiler/xla/python/bfloat16_test.py53classTests the non-numpy Python methods of the bfloat16 type.
346344Bfloat16NumPyTesttensorflow/tensorflow/compiler/xla/python/bfloat16_test.py251classTests the NumPy integration of the bfloat16 type.
347345_interpreter_backend_factorytensorflow/tensorflow/compiler/xla/python/xla_client.py58function
348346_cpu_backend_factorytensorflow/tensorflow/compiler/xla/python/xla_client.py62function
349347_gpu_backend_factorytensorflow/tensorflow/compiler/xla/python/xla_client.py66functionReturns a GPU backend. BFC allocator is used by default.
350348register_local_backend_factorytensorflow/tensorflow/compiler/xla/python/xla_client.py101function
351349_get_local_backendstensorflow/tensorflow/compiler/xla/python/xla_client.py108functionInstantiates all known local backends.
352350get_local_backendtensorflow/tensorflow/compiler/xla/python/xla_client.py131functionReturns a local backend. Args: name: the backend name. If `None`, a default local backend is returned, typically `gpu` if one is present, or `cpu` if not. If a string, the named backend is returned or an exception raised. Returns: A LocalBackend object.
353351OpMetadatatensorflow/tensorflow/compiler/xla/python/xla_client.py152classPython representation of a xla.OpMetadata protobuf.
354352CurrentSourceInfoMetadatatensorflow/tensorflow/compiler/xla/python/xla_client.py163functionHelper for use in source mapping that returns an OpMetadata object.
355353dtype_to_etypetensorflow/tensorflow/compiler/xla/python/xla_client.py206functionConvenience function for reading DTYPE_TO_XLA_ELEMENT_TYPE.
356354shape_from_pyvaltensorflow/tensorflow/compiler/xla/python/xla_client.py272functionReturns a Shape that describes a tuple-tree of Numpy arrays.
357355execute_with_python_valuestensorflow/tensorflow/compiler/xla/python/xla_client.py334functionExecute on one replica with Python values as arguments and output.
358356execute_with_python_values_replicatedtensorflow/tensorflow/compiler/xla/python/xla_client.py345functionExecute on many replicas with Python values as arguments and output. Arguments: executable: the program to run. arguments: a list of lists of Python values indexed by `[replica][arg_num]` to pass as inputs. backend: the backend we are targeting. Returns: A list of python values, one per replica.
359357PaddingTypetensorflow/tensorflow/compiler/xla/python/xla_client.py374class
360358window_padding_type_to_pad_valuestensorflow/tensorflow/compiler/xla/python/xla_client.py379functionMaps PaddingType or string to pad values (list of pairs of ints).
361359register_custom_call_targettensorflow/tensorflow/compiler/xla/python/xla_client.py418functionRegisters a custom call target. Args: name: bytes containing the name of the function. fn: a PyCapsule object containing the function pointer. platform: the target platform.
362360PaddingConfigDimensiontensorflow/tensorflow/compiler/xla/python/xla_client.py433classPython representation of a xla.PaddingConfigDimension protobuf.
363361PaddingConfigtensorflow/tensorflow/compiler/xla/python/xla_client.py443classPython representation of a xla.PaddingConfig protobuf.
364362make_padding_configtensorflow/tensorflow/compiler/xla/python/xla_client.py451functionCreate PaddingConfig proto from list of triples of integers. Args: padding_config: either a PaddingConfig or a list of integer triples (edge_padding_low, edge_padding_high, interior_padding) representing the configuration of the padding operation. Returns: A `PaddingConfig` object.
365363DotDimensionNumberstensorflow/tensorflow/compiler/xla/python/xla_client.py476classPython representation of a xla.DotDimensionNumbers protobuf.
366364make_dot_dimension_numberstensorflow/tensorflow/compiler/xla/python/xla_client.py488functionBuilds a DotDimensionNumbers object from a specification. Args: dimension_numbers: either a `DotDimensionNumbers` or a nested tuple `((lhs_contract, rhs_contract), (lhs_batch, rhs_batch))` of lists of integers representing the dimensions to treat as contracting dimensions and batch dimensions on each input operand. Returns: A `DotDimensionNumbers` object.
367365ConvolutionDimensionNumberstensorflow/tensorflow/compiler/xla/python/xla_client.py516classPython representation of a xla.ConvolutionDimensionNumbers protobuf.
368366make_convolution_dimension_numberstensorflow/tensorflow/compiler/xla/python/xla_client.py536functionBuilds a ConvolutionDimensionNumbers object from a specification. Args: dimension_numbers: optional, either a ConvolutionDimensionNumbers object or a tuple (lhs_spec, rhs_spec, out_spec). Each element is a string of length N+2 identifying by position: (1) batch dimensions in lhs, rhs, and the output with the character 'N', (2) feature dimensions in lhs and the output with the character 'C', (3) input and output feature dimensions in rhs with the characters 'I' and 'O' respectively, and (4) spatial dimension correspondences between lhs, rhs, and the output using any distinct characters. For example, to indicate dimension numbers consistent with the Conv operation with two spatial dimensions, one could use ('NCHW', 'OIHW', 'NCHW'). As another example, to indicate dimension numbers consistent with the TensorFlow Conv2D operation, one could use ('NHWC', 'HWIO', 'NHWC'). When using the latter form of convolution dimension specification, window strides are associated with spatial dimension character labels according to the order in which the labels appear in the rhs_spec string, so that window_strides[0] is matched with the dimension corresponding to the first character appearing in rhs_spec that is not 'I' or 'O'. By default, use the same dimension numbering as Conv and ConvWithGeneralPadding. num_spatial_dimensions: the number of spatial dimensions. Returns: A `ConvolutionDimensionNumbers` object.
369367OpShardingtensorflow/tensorflow/compiler/xla/python/xla_client.py600classPython representation of a xla.OpSharding protobuf.
370368PrecisionConfigtensorflow/tensorflow/compiler/xla/python/xla_client.py614classPython representation of a xla.PrecisionConfig protobuf.
371369GatherDimensionNumberstensorflow/tensorflow/compiler/xla/python/xla_client.py624classPython representation of a xla.GatherDimensionNumbers protobuf.
372370ScatterDimensionNumberstensorflow/tensorflow/compiler/xla/python/xla_client.py636classPython representation of a xla.ScatterDimensionNumbers protobuf.
373371ReplicaGrouptensorflow/tensorflow/compiler/xla/python/xla_client.py648classPython representation of a xla.ReplicaGroup protobuf.
374372_make_replica_group_prototensorflow/tensorflow/compiler/xla/python/xla_client.py656function
375373make_replica_groupstensorflow/tensorflow/compiler/xla/python/xla_client.py662function
376374tracebackstensorflow/tensorflow/compiler/xla/python/xla_client.py677functionContext manager that enables or disables traceback collection.
377375heap_profiletensorflow/tensorflow/compiler/xla/python/xla_client.py687functionReturns a gzipped pprof protocol buffer containing a heap profile.
378376TestFactorytensorflow/tensorflow/compiler/xla/python/xla_client_test.py56function
379377InstantiateTeststensorflow/tensorflow/compiler/xla/python/xla_client_test.py2103function
380378TpuBackendtensorflow/tensorflow/compiler/xla/python/tpu_driver/client/tpu_client.py29classXLA backend implemented using the Tpu driver API.
381379ConvertLiteralToNumpyArraytensorflow/tensorflow/compiler/xla/python_api/xla_literal.py28functionConverts a XLA literal to a Numpy array.
382380_ConvertNumpyArrayToLiteraltensorflow/tensorflow/compiler/xla/python_api/xla_literal.py64functionConverts a Numpy array to a XLA literal.
383381ConvertNumpyArrayToLiteraltensorflow/tensorflow/compiler/xla/python_api/xla_literal.py85functionConverts a Numpy array or a nested tuple thereof to an XLA literal.
384382Shapetensorflow/tensorflow/compiler/xla/python_api/xla_shape.py29classWraps a xla_data_pb2.ShapeProto message with a convenient Python type. Provides direct access to the underlying xla_data_pb2.ShapeProto message in the message attribute, along with accessor wrappers to the message's fields. Avoid direct access to .message unless interacting directly with protobuf APIs like CopyFrom. In other words, prefer hauling the shape around in a Shape, and only access .message when strictly required by the protobuf API.
385383_CreateShapeFromNumpytensorflow/tensorflow/compiler/xla/python_api/xla_shape.py103functionCreate a Shape from a given Numpy array. Args: ndarray: Numpy array. Returns: A Shape object.
386384CreateShapeFromNumpytensorflow/tensorflow/compiler/xla/python_api/xla_shape.py129functionCreate a Shape from a Numpy array or a nested tuple structure thereof. Args: value: Numpy array or (possibly nested) tuple structure that bottoms out in Numpy arrays. Returns: A Shape object.
387385CreateShapeFromDtypeAndTupletensorflow/tensorflow/compiler/xla/python_api/xla_shape.py147functionCreate a shape from a Numpy dtype and a sequence of nonnegative integers. Args: dtype: a numpy dtype, e.g. np.dtype('int32'). shape_tuple: a sequence of nonnegative integers. Returns: A Shape object.
388386RamFilesystemTesttensorflow/tensorflow/core/platform/ram_file_system_test.py38class
389387AddOneTesttensorflow/tensorflow/examples/adding_an_op/cuda_op_test.py25class
390388FactTesttensorflow/tensorflow/examples/adding_an_op/fact_test.py25class
391389ZeroOut1Testtensorflow/tensorflow/examples/adding_an_op/zero_out_1_test.py29class
392390ZeroOut2Testtensorflow/tensorflow/examples/adding_an_op/zero_out_2_test.py30class
393391ZeroOut3Testtensorflow/tensorflow/examples/adding_an_op/zero_out_3_test.py27class
394392_zero_out_gradtensorflow/tensorflow/examples/adding_an_op/zero_out_grad_2.py28functionThe gradients for `zero_out`. Args: op: The `zero_out` `Operation` that we are differentiating, which we can use to find the inputs and outputs of the original op. grad: Gradient with respect to the output of the `zero_out` op. Returns: Gradients with respect to the input of `zero_out`.
395393load_graphtensorflow/tensorflow/examples/label_image/label_image.py26function
396394read_tensor_from_image_filetensorflow/tensorflow/examples/label_image/label_image.py38function
397395load_labelstensorflow/tensorflow/examples/label_image/label_image.py65function
398396maintensorflow/tensorflow/examples/saved_model/integration_tests/deploy_mnist_cnn.py47function
399397MaybeDistributionScopetensorflow/tensorflow/examples/saved_model/integration_tests/distribution_strategy_utils.py48classProvides a context allowing no distribution strategy.
400398make_feature_extractortensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py56functionReturns a Keras Model to compute a feature vector from MNIST images.
401399set_feature_extractor_hparamstensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py72function
402400make_classifiertensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py76functionReturns a Keras Model to classify MNIST using feature_extractor.
403401wrap_keras_model_for_exporttensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py87functionWraps `model` for saving and loading as SavedModel.
404402_get_traced_losstensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py144functionReturns tf.function for model.losses[i] with a trace for zero args. The intended usage is [_get_traced_loss(model, i) for i in range(len(model.losses))] This is better than [tf.function(lambda: model.losses[i], input_signature=[]) for i ...] because it avoids capturing a loop index in a lambda, and removes any chance of deferring the trace. Args: model: a Keras Model. i: an integer between from 0 up to but to len(model.losses).
405403maintensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py163function
406404maintensorflow/tensorflow/examples/saved_model/integration_tests/export_rnn_cell.py32function
407405write_vocabulary_filetensorflow/tensorflow/examples/saved_model/integration_tests/export_simple_text_embedding.py34functionWrite temporary vocab file for module construction.
408406TextEmbeddingModeltensorflow/tensorflow/examples/saved_model/integration_tests/export_simple_text_embedding.py44classText embedding model. A text embeddings model that takes a sentences on input and outputs the sentence embedding.
409407maintensorflow/tensorflow/examples/saved_model/integration_tests/export_simple_text_embedding.py96function
410408TextRnnModeltensorflow/tensorflow/examples/saved_model/integration_tests/export_text_rnn_model.py31classText RNN model. A full generative text RNN model that can train and decode sentences from a starting word.
411409maintensorflow/tensorflow/examples/saved_model/integration_tests/export_text_rnn_model.py170function
412410TestCasetensorflow/tensorflow/examples/saved_model/integration_tests/integration_scripts.py42classBase class to write SavedModel integration tests.
413411MaybeRunScriptInsteadtensorflow/tensorflow/examples/saved_model/integration_tests/integration_scripts.py62function
414412_load_random_datatensorflow/tensorflow/examples/saved_model/integration_tests/mnist_util.py28function
415413load_reshaped_datatensorflow/tensorflow/examples/saved_model/integration_tests/mnist_util.py34functionReturns MNIST or Fashion MNIST or fake train and test data.
416414_prepare_imagetensorflow/tensorflow/examples/saved_model/integration_tests/mnist_util.py44functionConverts images to [n,h,w,c] format in range [0,1].
417415_prepare_labeltensorflow/tensorflow/examples/saved_model/integration_tests/mnist_util.py49functionConerts labels to one-hot encoding.
418416SavedModelTesttensorflow/tensorflow/examples/saved_model/integration_tests/saved_model_test.py32class
419417make_feature_extractortensorflow/tensorflow/examples/saved_model/integration_tests/use_mnist_cnn.py72functionLoad a pre-trained feature extractor and wrap it for use in Keras.
420418make_classifiertensorflow/tensorflow/examples/saved_model/integration_tests/use_mnist_cnn.py100functionReturns a Keras Model to classify MNIST using feature_extractor.
421419maintensorflow/tensorflow/examples/saved_model/integration_tests/use_mnist_cnn.py112function
422420traintensorflow/tensorflow/examples/saved_model/integration_tests/use_model_in_sequential_keras.py35functionBuild a Keras model and train with mock data.
423421maintensorflow/tensorflow/examples/saved_model/integration_tests/use_model_in_sequential_keras.py67function
424422maintensorflow/tensorflow/examples/saved_model/integration_tests/use_rnn_cell.py33function
425423traintensorflow/tensorflow/examples/saved_model/integration_tests/use_text_embedding_in_dataset.py34functionBuild a Keras model and train with mock data.
426424maintensorflow/tensorflow/examples/saved_model/integration_tests/use_text_embedding_in_dataset.py65function
427425maintensorflow/tensorflow/examples/saved_model/integration_tests/use_text_rnn_model.py32function
428426StreamingAccuracyStatstensorflow/tensorflow/examples/speech_commands/accuracy_utils.py24classGet streaming accuracy statistics every time a new command is founded. Attributes: _how_many_gt: How many ground truths. _how_many_gt_matched: How many ground truths have been matched. _how_many_fp: How many commands have been fired as false positive. _how_many_c: How many commands have been fired correctly. _how_many_w: How many commands have been fired wrongly. _gt_occurrence: A list to record which commands and when it occurs in the input audio stream. _previous_c: A variable to record the last status of _how_many_c. _previous_w: A variable to record the last status of _how_many_w. _previous_fp: A variable to record the last status of _how_many_fp.
429427create_inference_graphtensorflow/tensorflow/examples/speech_commands/freeze.py63functionCreates an audio model with the nodes needed for inference. Uses the supplied arguments to create a model, and inserts the input and output nodes that are needed to use the graph for inference. Args: wanted_words: Comma-separated list of the words we're trying to recognize. sample_rate: How many samples per second are in the input audio files. clip_duration_ms: How many samples to analyze for the audio pattern. clip_stride_ms: How often to run recognition. Useful for models with cache. window_size_ms: Time slice duration to estimate frequencies from. window_stride_ms: How far apart time slices should be. feature_bin_count: Number of frequency bands to analyze. model_architecture: Name of the kind of model to generate. preprocess: How the spectrogram is processed to produce features, for example 'mfcc', 'average', or 'micro'. Returns: Input and output tensor objects. Raises: Exception: If the preprocessing mode isn't recognized.
430428save_graph_deftensorflow/tensorflow/examples/speech_commands/freeze.py161functionWrites a graph def file out to disk. Args: file_name: Where to save the file. frozen_graph_def: GraphDef proto object to save.
431429save_saved_modeltensorflow/tensorflow/examples/speech_commands/freeze.py176functionWrites a SavedModel out to disk. Args: file_name: Where to save the file. sess: TensorFlow session containing the graph. input_tensor: Tensor object defining the input's properties. output_tensor: Tensor object defining the output's properties.
432430maintensorflow/tensorflow/examples/speech_commands/freeze.py211function
433431FreezeTesttensorflow/tensorflow/examples/speech_commands/freeze_test.py30class
434432mix_in_audio_sampletensorflow/tensorflow/examples/speech_commands/generate_streaming_test_wav.py55functionMixes the sample data into the main track at the specified offset. Args: track_data: Numpy array holding main audio data. Modified in-place. track_offset: Where to mix the sample into the main track. sample_data: Numpy array of audio data to mix into the main track. sample_offset: Where to start in the audio sample. clip_duration: How long the sample segment is. sample_volume: Loudness to mix the sample in at. ramp_in: Length in samples of volume increase stage. ramp_out: Length in samples of volume decrease stage.
435433maintensorflow/tensorflow/examples/speech_commands/generate_streaming_test_wav.py86function
436434GenerateStreamingTestWavTesttensorflow/tensorflow/examples/speech_commands/generate_streaming_test_wav_test.py27class
437435prepare_words_listtensorflow/tensorflow/examples/speech_commands/input_data.py58functionPrepends common tokens to the custom word list. Args: wanted_words: List of strings containing the custom words. Returns: List with the standard silence and unknown tokens added.
438436which_settensorflow/tensorflow/examples/speech_commands/input_data.py70functionDetermines which data partition the file should belong to. We want to keep files in the same training, validation, or testing sets even if new ones are added over time. This makes it less likely that testing samples will accidentally be reused in training when long runs are restarted for example. To keep this stability, a hash of the filename is taken and used to determine which set it should belong to. This determination only depends on the name and the set proportions, so it won't change as other files are added. It's also useful to associate particular files as related (for example words spoken by the same person), so anything after '_nohash_' in a filename is ignored for set determination. This ensures that 'bobby_nohash_0.wav' and 'bobby_nohash_1.wav' are always in the same set, for example. Args: filename: File path of the data sample. validation_percentage: How much of the data set to use for validation. testing_percentage: How much of the data set to use for testing. Returns: String, one of 'training', 'validation', or 'testing'.
439437load_wav_filetensorflow/tensorflow/examples/speech_commands/input_data.py118functionLoads an audio file and returns a float PCM-encoded array of samples. Args: filename: Path to the .wav file to load. Returns: Numpy array holding the sample data as floats between -1.0 and 1.0.
440438save_wav_filetensorflow/tensorflow/examples/speech_commands/input_data.py136functionSaves audio sample data to a .wav audio file. Args: filename: Path to save the file to. wav_data: 2D array of float PCM-encoded audio data. sample_rate: Samples per second to encode in the file.
441439get_features_rangetensorflow/tensorflow/examples/speech_commands/input_data.py160functionReturns the expected min/max for generated features. Args: model_settings: Information about the current model being trained. Returns: Min/max float pair holding the range of features. Raises: Exception: If preprocessing mode isn't recognized.
442440AudioProcessortensorflow/tensorflow/examples/speech_commands/input_data.py190classHandles loading, partitioning, and preparing audio training data.
443441InputDataTesttensorflow/tensorflow/examples/speech_commands/input_data_test.py33class
444442load_graphtensorflow/tensorflow/examples/speech_commands/label_wav.py43functionUnpersists graph from file as default graph.
445443load_labelstensorflow/tensorflow/examples/speech_commands/label_wav.py51functionRead in labels, one label per line.
446444run_graphtensorflow/tensorflow/examples/speech_commands/label_wav.py56functionRuns the audio data through the graph and prints predictions.
447445label_wavtensorflow/tensorflow/examples/speech_commands/label_wav.py77functionLoads the model and labels, and runs the inference to print predictions.
448446maintensorflow/tensorflow/examples/speech_commands/label_wav.py98functionEntry point for script, converts flags to arguments.
449447load_graphtensorflow/tensorflow/examples/speech_commands/label_wav_dir.py44functionUnpersists graph from file as default graph.
450448load_labelstensorflow/tensorflow/examples/speech_commands/label_wav_dir.py52functionRead in labels, one label per line.
451449run_graphtensorflow/tensorflow/examples/speech_commands/label_wav_dir.py57functionRuns the audio data through the graph and prints predictions.
452450label_wavtensorflow/tensorflow/examples/speech_commands/label_wav_dir.py85functionLoads the model and labels, and runs the inference to print predictions.
453451maintensorflow/tensorflow/examples/speech_commands/label_wav_dir.py101functionEntry point for script, converts flags to arguments.
454452LabelWavTesttensorflow/tensorflow/examples/speech_commands/label_wav_test.py29class
455453_next_power_of_twotensorflow/tensorflow/examples/speech_commands/models.py27functionCalculates the smallest enclosing power of two for an input. Args: x: Positive float or integer number. Returns: Next largest power of two integer.
456454prepare_model_settingstensorflow/tensorflow/examples/speech_commands/models.py39functionCalculates common settings needed for all models. Args: label_count: How many classes are to be recognized. sample_rate: Number of audio samples per second. clip_duration_ms: Length of each audio clip to be analyzed. window_size_ms: Duration of frequency analysis window. window_stride_ms: How far to move in time between frequency windows. feature_bin_count: Number of frequency bins to use for analysis. preprocess: How the spectrogram is processed to produce features. Returns: Dictionary containing common settings. Raises: ValueError: If the preprocessing mode isn't recognized.
457455create_modeltensorflow/tensorflow/examples/speech_commands/models.py95functionBuilds a model of the requested architecture compatible with the settings. There are many possible ways of deriving predictions from a spectrogram input, so this function provides an abstract interface for creating different kinds of models in a black-box way. You need to pass in a TensorFlow node as the 'fingerprint' input, and this should output a batch of 1D features that describe the audio. Typically this will be derived from a spectrogram that's been run through an MFCC, but in theory it can be any feature vector of the size specified in model_settings['fingerprint_size']. The function will build the graph it needs in the current TensorFlow graph, and return the tensorflow output that will contain the 'logits' input to the softmax prediction process. If training flag is on, it will also return a placeholder node that can be used to control the dropout amount. See the implementations below for the possible model architectures that can be requested. Args: fingerprint_input: TensorFlow node that will output audio feature vectors. model_settings: Dictionary of information about the model. model_architecture: String specifying which kind of model to create. is_training: Whether the model is going to be used for training. runtime_settings: Dictionary of information about the runtime. Returns: TensorFlow node outputting logits results, and optionally a dropout placeholder. Raises: Exception: If the architecture type isn't recognized.
458456load_variables_from_checkpointtensorflow/tensorflow/examples/speech_commands/models.py153functionUtility function to centralize checkpoint restoration. Args: sess: TensorFlow session. start_checkpoint: Path to saved checkpoint on disk.
459457create_single_fc_modeltensorflow/tensorflow/examples/speech_commands/models.py164functionBuilds a model with a single hidden fully-connected layer. This is a very simple model with just one matmul and bias layer. As you'd expect, it doesn't produce very accurate results, but it is very fast and simple, so it's useful for sanity testing. Here's the layout of the graph: (fingerprint_input) v [MatMul]<-(weights) v [BiasAdd]<-(bias) v Args: fingerprint_input: TensorFlow node that will output audio feature vectors. model_settings: Dictionary of information about the model. is_training: Whether the model is going to be used for training. Returns: TensorFlow node outputting logits results, and optionally a dropout placeholder.
460458create_conv_modeltensorflow/tensorflow/examples/speech_commands/models.py207functionBuilds a standard convolutional model. This is roughly the network labeled as 'cnn-trad-fpool3' in the 'Convolutional Neural Networks for Small-footprint Keyword Spotting' paper: http://www.isca-speech.org/archive/interspeech_2015/papers/i15_1478.pdf Here's the layout of the graph: (fingerprint_input) v [Conv2D]<-(weights) v [BiasAdd]<-(bias) v [Relu] v [MaxPool] v [Conv2D]<-(weights) v [BiasAdd]<-(bias) v [Relu] v [MaxPool] v [MatMul]<-(weights) v [BiasAdd]<-(bias) v This produces fairly good quality results, but can involve a large number of weight parameters and computations. For a cheaper alternative from the same paper with slightly less accuracy, see 'low_latency_conv' below. During training, dropout nodes are introduced after each relu, controlled by a placeholder. Args: fingerprint_input: TensorFlow node that will output audio feature vectors. model_settings: Dictionary of information about the model. is_training: Whether the model is going to be used for training. Returns: TensorFlow node outputting logits results, and optionally a dropout placeholder.
461459create_low_latency_conv_modeltensorflow/tensorflow/examples/speech_commands/models.py333functionBuilds a convolutional model with low compute requirements. This is roughly the network labeled as 'cnn-one-fstride4' in the 'Convolutional Neural Networks for Small-footprint Keyword Spotting' paper: http://www.isca-speech.org/archive/interspeech_2015/papers/i15_1478.pdf Here's the layout of the graph: (fingerprint_input) v [Conv2D]<-(weights) v [BiasAdd]<-(bias) v [Relu] v [MatMul]<-(weights) v [BiasAdd]<-(bias) v [MatMul]<-(weights) v [BiasAdd]<-(bias) v [MatMul]<-(weights) v [BiasAdd]<-(bias) v This produces slightly lower quality results than the 'conv' model, but needs fewer weight parameters and computations. During training, dropout nodes are introduced after the relu, controlled by a placeholder. Args: fingerprint_input: TensorFlow node that will output audio feature vectors. model_settings: Dictionary of information about the model. is_training: Whether the model is going to be used for training. Returns: TensorFlow node outputting logits results, and optionally a dropout placeholder.
462460create_low_latency_svdf_modeltensorflow/tensorflow/examples/speech_commands/models.py462functionBuilds an SVDF model with low compute requirements. This is based in the topology presented in the 'Compressing Deep Neural Networks using a Rank-Constrained Topology' paper: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43813.pdf Here's the layout of the graph: (fingerprint_input) v [SVDF]<-(weights) v [BiasAdd]<-(bias) v [Relu] v [MatMul]<-(weights) v [BiasAdd]<-(bias) v [MatMul]<-(weights) v [BiasAdd]<-(bias) v [MatMul]<-(weights) v [BiasAdd]<-(bias) v This model produces lower recognition accuracy than the 'conv' model above, but requires fewer weight parameters and, significantly fewer computations. During training, dropout nodes are introduced after the relu, controlled by a placeholder. Args: fingerprint_input: TensorFlow node that will output audio feature vectors. The node is expected to produce a 2D Tensor of shape: [batch, model_settings['fingerprint_width'] * model_settings['spectrogram_length']] with the features corresponding to the same time slot arranged contiguously, and the oldest slot at index [:, 0], and newest at [:, -1]. model_settings: Dictionary of information about the model. is_training: Whether the model is going to be used for training. runtime_settings: Dictionary of information about the runtime. Returns: TensorFlow node outputting logits results, and optionally a dropout placeholder. Raises: ValueError: If the inputs tensor is incorrectly shaped.
463461create_tiny_conv_modeltensorflow/tensorflow/examples/speech_commands/models.py673functionBuilds a convolutional model aimed at microcontrollers. Devices like DSPs and microcontrollers can have very small amounts of memory and limited processing power. This model is designed to use less than 20KB of working RAM, and fit within 32KB of read-only (flash) memory. Here's the layout of the graph: (fingerprint_input) v [Conv2D]<-(weights) v [BiasAdd]<-(bias) v [Relu] v [MatMul]<-(weights) v [BiasAdd]<-(bias) v This doesn't produce particularly accurate results, but it's designed to be used as the first stage of a pipeline, running on a low-energy piece of hardware that can always be on, and then wake higher-power chips when a possible utterance has been found, so that more accurate analysis can be done. During training, a dropout node is introduced after the relu, controlled by a placeholder. Args: fingerprint_input: TensorFlow node that will output audio feature vectors. model_settings: Dictionary of information about the model. is_training: Whether the model is going to be used for training. Returns: TensorFlow node outputting logits results, and optionally a dropout placeholder.
464462create_tiny_embedding_conv_modeltensorflow/tensorflow/examples/speech_commands/models.py765functionBuilds a convolutional model aimed at microcontrollers. Devices like DSPs and microcontrollers can have very small amounts of memory and limited processing power. This model is designed to use less than 20KB of working RAM, and fit within 32KB of read-only (flash) memory. Here's the layout of the graph: (fingerprint_input) v [Conv2D]<-(weights) v [BiasAdd]<-(bias) v [Relu] v [Conv2D]<-(weights) v [BiasAdd]<-(bias) v [Relu] v [Conv2D]<-(weights) v [BiasAdd]<-(bias) v [Relu] v [MatMul]<-(weights) v [BiasAdd]<-(bias) v This doesn't produce particularly accurate results, but it's designed to be used as the first stage of a pipeline, running on a low-energy piece of hardware that can always be on, and then wake higher-power chips when a possible utterance has been found, so that more accurate analysis can be done. During training, a dropout node is introduced after the relu, controlled by a placeholder. Args: fingerprint_input: TensorFlow node that will output audio feature vectors. model_settings: Dictionary of information about the model. is_training: Whether the model is going to be used for training. Returns: TensorFlow node outputting logits results, and optionally a dropout placeholder.
465463ModelsTesttensorflow/tensorflow/examples/speech_commands/models_test.py28class
466464RecognizeResulttensorflow/tensorflow/examples/speech_commands/recognize_commands.py25classSave recognition result temporarily. Attributes: founded_command: A string indicating the word just founded. Default value is '_silence_' score: An float representing the confidence of founded word. Default value is zero. is_new_command: A boolean indicating if the founded command is a new one against the last one. Default value is False.
467465RecognizeCommandstensorflow/tensorflow/examples/speech_commands/recognize_commands.py67classSmooth the inference results by using average window. Maintain a slide window over the audio stream, which adds new result(a pair of the 1.confidences of all classes and 2.the start timestamp of input audio clip) directly the inference produces one and removes the most previous one and other abnormal values. Then it smooth the results in the window to get the most reliable command in this period. Attributes: _label: A list containing commands at corresponding lines. _average_window_duration: The length of average window. _detection_threshold: A confidence threshold for filtering out unreliable command. _suppression_ms: Milliseconds every two reliable founded commands should apart. _minimum_count: An integer count indicating the minimum results the average window should cover. _previous_results: A deque to store previous results. _label_count: The length of label list. _previous_top_label: Last founded command. Initial value is '_silence_'. _previous_top_time: The timestamp of _previous results. Default is -np.inf.
468466load_graphtensorflow/tensorflow/examples/speech_commands/test_streaming_accuracy.py80functionRead a tensorflow model, and creates a default graph object.
469467read_label_filetensorflow/tensorflow/examples/speech_commands/test_streaming_accuracy.py92functionLoad a list of label.
470468read_wav_filetensorflow/tensorflow/examples/speech_commands/test_streaming_accuracy.py101functionLoad a wav file and return sample_rate and numpy data of float64 type.
471469maintensorflow/tensorflow/examples/speech_commands/test_streaming_accuracy.py111function
472470maintensorflow/tensorflow/examples/speech_commands/train.py88function
473471verbosity_argtensorflow/tensorflow/examples/speech_commands/train.py480functionParses verbosity argument. Args: value: A member of tf.logging. Raises: ArgumentTypeError: Not an expected value.
474472requires_contribtensorflow/tensorflow/examples/speech_commands/train_test.py32function
475473DictStructtensorflow/tensorflow/examples/speech_commands/train_test.py44class
476474TrainTesttensorflow/tensorflow/examples/speech_commands/train_test.py50class
477475wav_to_featurestensorflow/tensorflow/examples/speech_commands/wav_to_features.py47functionConverts an audio file into its corresponding feature map. Args: sample_rate: Expected sample rate of the wavs. clip_duration_ms: Expected duration in milliseconds of the wavs. window_size_ms: How long each spectrogram timeslice is. window_stride_ms: How far to move in time between spectrogram timeslices. feature_bin_count: How many bins to use for the feature fingerprint. quantize: Whether to train the model for eight-bit deployment. preprocess: Spectrogram processing mode; "mfcc", "average" or "micro". input_wav: Path to the audio WAV file to read. output_c_file: Where to save the generated C source file.
478476maintensorflow/tensorflow/examples/speech_commands/wav_to_features.py125function
479477WavToFeaturesTesttensorflow/tensorflow/examples/speech_commands/wav_to_features_test.py30class
480478create_modeltensorflow/tensorflow/examples/tf2_showcase/mnist.py69functionModel to recognize digits in the MNIST dataset. Network structure is equivalent to: https://github.com/tensorflow/tensorflow/blob/r1.5/tensorflow/examples/tutorials/mnist/mnist_deep.py and https://github.com/tensorflow/models/blob/master/tutorials/image/mnist/convolutional.py But uses the tf.keras API. Returns: A tf.keras.Model.
481479mnist_datasetstensorflow/tensorflow/examples/tf2_showcase/mnist.py115function
482480losstensorflow/tensorflow/examples/tf2_showcase/mnist.py125function
483481compute_accuracytensorflow/tensorflow/examples/tf2_showcase/mnist.py131function
484482traintensorflow/tensorflow/examples/tf2_showcase/mnist.py140functionTrains model on `dataset` using `optimizer`.
485483testtensorflow/tensorflow/examples/tf2_showcase/mnist.py166functionPerform an evaluation of `model` on the examples from `dataset`.
486484train_and_exporttensorflow/tensorflow/examples/tf2_showcase/mnist.py184functionRun MNIST training and eval loop in eager mode. Args: flags_obj: An object containing parsed flag values.
487485import_and_evaltensorflow/tensorflow/examples/tf2_showcase/mnist.py237function
488486apply_cleantensorflow/tensorflow/examples/tf2_showcase/mnist.py247function
489487maintensorflow/tensorflow/examples/tf2_showcase/mnist.py254function
490488placeholder_inputstensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py37functionGenerate placeholder variables to represent the input tensors. These placeholders are used as inputs by the rest of the model building code and will be fed from the downloaded data in the .run() loop, below. Args: batch_size: The batch size will be baked into both placeholders. Returns: images_placeholder: Images placeholder. labels_placeholder: Labels placeholder.
491489fill_feed_dicttensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py59functionFills the feed_dict for training the given step. A feed_dict takes the form of: feed_dict = { <placeholder>: <tensor of values to be passed for placeholder>, .... } Args: data_set: The set of images and labels, from input_data.read_data_sets() images_pl: The images placeholder, from placeholder_inputs(). labels_pl: The labels placeholder, from placeholder_inputs(). Returns: feed_dict: The feed dictionary mapping from placeholders to values.
492490do_evaltensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py87functionRuns one evaluation against the full epoch of data. Args: sess: The session in which the model has been trained. eval_correct: The Tensor that returns the number of correct predictions. images_placeholder: The images placeholder. labels_placeholder: The labels placeholder. data_set: The set of images and labels to evaluate, from input_data.read_data_sets().
493491run_trainingtensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py116functionTrain MNIST for a number of steps.
494492maintensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py218function
495493_read32tensorflow/tensorflow/examples/tutorials/mnist/input_data.py43function
496494_extract_imagestensorflow/tensorflow/examples/tutorials/mnist/input_data.py49functionExtract the images into a 4D uint8 numpy array [index, y, x, depth]. Args: f: A file object that can be passed into a gzip reader. Returns: data: A 4D uint8 numpy array [index, y, x, depth]. Raises: ValueError: If the bytestream does not start with 2051.
497495_dense_to_one_hottensorflow/tensorflow/examples/tutorials/mnist/input_data.py78functionConvert class labels from scalars to one-hot vectors.
498496_extract_labelstensorflow/tensorflow/examples/tutorials/mnist/input_data.py88functionExtract the labels into a 1D uint8 numpy array [index]. Args: f: A file object that can be passed into a gzip reader. one_hot: Does one hot encoding for the result. num_classes: Number of classes for the one hot encoding. Returns: labels: a 1D uint8 numpy array. Raises: ValueError: If the bystream doesn't start with 2049.
499497_DataSettensorflow/tensorflow/examples/tutorials/mnist/input_data.py116classContainer class for a _DataSet (deprecated). THIS CLASS IS DEPRECATED.
500498_maybe_downloadtensorflow/tensorflow/examples/tutorials/mnist/input_data.py242functionDownload the data from source url, unless it's already here. Args: filename: string, name of the file in the directory. work_directory: string, path to working directory. source_url: url to download from if file doesn't exist. Returns: Path to resulting file.
501499read_data_setstensorflow/tensorflow/examples/tutorials/mnist/input_data.py266function
502500inferencetensorflow/tensorflow/examples/tutorials/mnist/mnist.py45functionBuild the MNIST model up to where it may be used for inference. Args: images: Images placeholder, from inputs(). hidden1_units: Size of the first hidden layer. hidden2_units: Size of the second hidden layer. Returns: softmax_linear: Output tensor with the computed logits.
503501losstensorflow/tensorflow/examples/tutorials/mnist/mnist.py86functionCalculates the loss from the logits and the labels. Args: logits: Logits tensor, float - [batch_size, NUM_CLASSES]. labels: Labels tensor, int32 - [batch_size]. Returns: loss: Loss tensor of type float.
504502trainingtensorflow/tensorflow/examples/tutorials/mnist/mnist.py101functionSets up the training Ops. Creates a summarizer to track the loss over time in TensorBoard. Creates an optimizer and applies the gradients to all trainable variables. The Op returned by this function is what must be passed to the `sess.run()` call to cause the model to train. Args: loss: Loss tensor, from loss(). learning_rate: The learning rate to use for gradient descent. Returns: train_op: The Op for training.
505503evaluationtensorflow/tensorflow/examples/tutorials/mnist/mnist.py130functionEvaluate the quality of the logits at predicting the label. Args: logits: Logits tensor, float - [batch_size, NUM_CLASSES]. labels: Labels tensor, int32 - [batch_size], with values in the range [0, NUM_CLASSES). Returns: A scalar int32 tensor with the number of examples (out of batch_size) that were predicted correctly.
506504maintensorflow/tensorflow/examples/tutorials/mnist/mnist_softmax_xla.py34function
507505traintensorflow/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py38function
508506maintensorflow/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py185function
509507_hash_filetensorflow/tensorflow/examples/tutorials/word2vec/word2vec_basic.py41function
510508word2vec_basictensorflow/tensorflow/examples/tutorials/word2vec/word2vec_basic.py49functionExample of building, training and visualizing a word2vec model.
511509maintensorflow/tensorflow/examples/tutorials/word2vec/word2vec_basic.py360function
512510suppress_exceptiontensorflow/tensorflow/lite/examples/experimental_new_converter/stack_trace_example.py37function
513511TestModuletensorflow/tensorflow/lite/examples/experimental_new_converter/stack_trace_example.py46classThe test model has unsupported op.
514512test_from_saved_modeltensorflow/tensorflow/lite/examples/experimental_new_converter/stack_trace_example.py57functiondisplaying stack trace when converting saved model.
515513test_from_concrete_functiontensorflow/tensorflow/lite/examples/experimental_new_converter/stack_trace_example.py71functiondisplaying stack trace when converting concrete function.
516514maintensorflow/tensorflow/lite/examples/experimental_new_converter/stack_trace_example.py83function
517515load_labelstensorflow/tensorflow/lite/examples/python/label_image.py29function
518516_convert_bytes_to_cc_sourcetensorflow/tensorflow/lite/experimental/acceleration/compatibility/convert_binary_to_cc_source.py35functionReturns strings representing a C++ constant array containing `data`. Args: data: Byte array that will be converted into a C++ constant. array_name: String to use as the variable name for the constant array. max_line_width: The longest line length, for formatting purposes. include_guard: Name to use for the include guard macro definition. include_path: Optional path to include in the source file. use_tensorflow_license: Whether to include the standard TensorFlow Apache2 license in the generated files. Returns: Text that can be compiled as a C++ source file to link in the data as a literal array of values. Text that can be used as a C++ header file to reference the literal array.
519517maintensorflow/tensorflow/lite/experimental/acceleration/compatibility/convert_binary_to_cc_source.py155function
520518BidirectionalSequenceLstmTesttensorflow/tensorflow/lite/experimental/examples/lstm/bidirectional_sequence_lstm_test.py36class
521519BidirectionalSequenceRnnTesttensorflow/tensorflow/lite/experimental/examples/lstm/bidirectional_sequence_rnn_test.py38class
522520dynamic_rnntensorflow/tensorflow/lite/experimental/examples/lstm/rnn.py42functionCreates a recurrent neural network specified by RNNCell `cell`. Performs fully dynamic unrolling of `inputs`. Example: ```python # create a BasicRNNCell rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size) # 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size] # defining initial state initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32) # 'state' is a tensor of shape [batch_size, cell_state_size] outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data, initial_state=initial_state, dtype=tf.float32) ``` ```python # create 2 LSTMCells rnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size) for size in [128, 256]] # create a RNN cell composed sequentially of a number of RNNCells multi_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers) # 'outputs' is a tensor of shape [batch_size, max_time, 256] # 'state' is a N-tuple where N is the number of LSTMCells containing a # tf.nn.rnn_cell.LSTMStateTuple for each cell outputs, state = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell, inputs=data, dtype=tf.float32) ``` Args: cell: An instance of RNNCell. inputs: The RNN inputs. If `time_major == False` (default), this must be a `Tensor` of shape: `[batch_size, max_time, ...]`, or a nested tuple of such elements. If `time_major == True`, this must be a `Tensor` of shape: `[max_time, batch_size, ...]`, or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to `cell` at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to `cell` at each time step will be a `Tensor` or (possibly nested) tuple of Tensors each with dimensions `[batch_size, ...]`. sequence_length: (optional) An int32/int64 vector sized `[batch_size]`. Used to copy-through state and zero-out outputs when past a batch element's sequence length. So it's more for performance than correctness. initial_state: (optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`. dtype: (optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype. parallel_iterations: (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer. swap_memory: Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty. time_major: The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. scope: VariableScope for the created subgraph; defaults to "rnn". Returns: A pair (outputs, state) where: outputs: The RNN output `Tensor`. If time_major == False (default), this will be a `Tensor` shaped: `[batch_size, max_time, cell.output_size]`. If time_major == True, this will be a `Tensor` shaped: `[max_time, batch_size, cell.output_size]`. Note, if `cell.output_size` is a (possibly nested) tuple of integers or `TensorShape` objects, then `outputs` will be a tuple having the same structure as `cell.output_size`, containing Tensors having shapes corresponding to the shape data in `cell.output_size`. state: The final state. If `cell.state_size` is an int, this will be shaped `[batch_size, cell.state_size]`. If it is a `TensorShape`, this will be shaped `[batch_size] + cell.state_size`. If it is a (possibly nested) tuple of ints or `TensorShape`, this will be a tuple having the corresponding shapes. If cells are `LSTMCells` `state` will be a tuple containing a `LSTMStateTuple` for each cell. Raises: TypeError: If `cell` is not an instance of RNNCell. ValueError: If inputs is None or an empty list. RuntimeError: If not using control flow v2.
523521bidirectional_dynamic_rnntensorflow/tensorflow/lite/experimental/examples/lstm/rnn.py279functionCreates a dynamic version of bidirectional recurrent neural network. Takes input and builds independent forward and backward RNNs. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given. Args: cell_fw: An instance of RNNCell, to be used for forward direction. cell_bw: An instance of RNNCell, to be used for backward direction. inputs: The RNN inputs. If time_major == False (default), this must be a tensor of shape: `[batch_size, max_time, ...]`, or a nested tuple of such elements. If time_major == True, this must be a tensor of shape: `[max_time, batch_size, ...]`, or a nested tuple of such elements. sequence_length: (optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences in the batch. If not provided, all batch entries are assumed to be full sequences; and time reversal is applied from time `0` to `max_time` for each sequence. initial_state_fw: (optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`. initial_state_bw: (optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`. dtype: (optional) The data type for the initial states and expected output. Required if initial_states are not provided or RNN states have a heterogeneous dtype. parallel_iterations: (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer. swap_memory: Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty. time_major: The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. scope: VariableScope for the created subgraph; defaults to "bidirectional_rnn" Returns: A tuple (outputs, output_states) where: outputs: A tuple (output_fw, output_bw) containing the forward and the backward rnn output `Tensor`. If time_major == False (default), output_fw will be a `Tensor` shaped: `[batch_size, max_time, cell_fw.output_size]` and output_bw will be a `Tensor` shaped: `[batch_size, max_time, cell_bw.output_size]`. If time_major == True, output_fw will be a `Tensor` shaped: `[max_time, batch_size, cell_fw.output_size]` and output_bw will be a `Tensor` shaped: `[max_time, batch_size, cell_bw.output_size]`. It returns a tuple instead of a single concatenated `Tensor`, unlike in the `bidirectional_rnn`. If the concatenated one is preferred, the forward and backward outputs can be concatenated as `tf.concat(outputs, 2)`. output_states: A tuple (output_state_fw, output_state_bw) containing the forward and the backward final states of bidirectional rnn. Raises: TypeError: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.
524522TfLiteRNNCelltensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py39classThe most basic RNN cell. This is used only for TfLite, it provides hints and it also makes the variables in the desired for the tflite ops.
525523TFLiteLSTMCelltensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py162classLong short-term memory unit (LSTM) recurrent network cell. This is used only for TfLite, it provides hints and it also makes the variables in the desired for the tflite ops (transposed and separated). The default non-peephole implementation is based on: https://pdfs.semanticscholar.org/1154/0131eae85b2e11d53df7f1360eeb6476e7f4.pdf Felix Gers, Jurgen Schmidhuber, and Fred Cummins. "Learning to forget: Continual prediction with LSTM." IET, 850-855, 1999. The peephole implementation is based on: https://research.google.com/pubs/archive/43905.pdf Hasim Sak, Andrew Senior, and Francoise Beaufays. "Long short-term memory recurrent neural network architectures for large scale acoustic modeling." INTERSPEECH, 2014. The class uses optional peep-hole connections, optional cell clipping, and an optional projection layer. Note that this cell is not optimized for performance. Please use `tf.contrib.cudnn_rnn.CudnnLSTM` for better performance on GPU, or `tf.contrib.rnn.LSTMBlockCell` and `tf.contrib.rnn.LSTMBlockFusedCell` for better performance on CPU.
526524UnidirectionalSequenceLstmTesttensorflow/tensorflow/lite/experimental/examples/lstm/unidirectional_sequence_lstm_test.py36class
527525UnidirectionalSequenceRnnTesttensorflow/tensorflow/lite/experimental/examples/lstm/unidirectional_sequence_rnn_test.py37class
528526AudioFeatureGenerationTesttensorflow/tensorflow/lite/experimental/microfrontend/python/kernel_tests/audio_microfrontend_op_test.py35class
529527audio_microfrontendtensorflow/tensorflow/lite/experimental/microfrontend/python/ops/audio_microfrontend_op.py34functionAudio Microfrontend Op. This Op converts a sequence of audio data into one or more feature vectors containing filterbanks of the input. The conversion process uses a lightweight library to perform: 1. A slicing window function 2. Short-time FFTs 3. Filterbank calculations 4. Noise reduction 5. PCAN Auto Gain Control 6. Logarithmic scaling Args: audio: 1D Tensor, int16 audio data in temporal ordering. sample_rate: Integer, the sample rate of the audio in Hz. window_size: Integer, length of desired time frames in ms. window_step: Integer, length of step size for the next frame in ms. num_channels: Integer, the number of filterbank channels to use. upper_band_limit: Float, the highest frequency included in the filterbanks. lower_band_limit: Float, the lowest frequency included in the filterbanks. smoothing_bits: Int, scale up signal by 2^(smoothing_bits) before reduction. even_smoothing: Float, smoothing coefficient for even-numbered channels. odd_smoothing: Float, smoothing coefficient for odd-numbered channels. min_signal_remaining: Float, fraction of signal to preserve in smoothing. enable_pcan: Bool, enable PCAN auto gain control. pcan_strength: Float, gain normalization exponent. pcan_offset: Float, positive value added in the normalization denominator. gain_bits: Int, number of fractional bits in the gain. enable_log: Bool, enable logarithmic scaling of filterbanks. scale_shift: Integer, scale filterbanks by 2^(scale_shift). left_context: Integer, number of preceding frames to attach to each frame. right_context: Integer, number of preceding frames to attach to each frame. frame_stride: Integer, M frames to skip over, where output[n] = frame[n*M]. zero_padding: Bool, if left/right context is out-of-bounds, attach frame of zeroes. Otherwise, frame[0] or frame[size-1] will be copied. out_scale: Integer, divide all filterbanks by this number. out_type: DType, type of the output Tensor, defaults to UINT16. Returns: filterbanks: 2D Tensor, each row is a time frame, each column is a channel. Raises: ValueError: If the audio tensor is not explicitly a vector.
530528SupportedOptensorflow/tensorflow/lite/experimental/tensorboard/ops_util.py26classSpec of supported ops. Args: op: string of op name.
531529get_potentially_supported_opstensorflow/tensorflow/lite/experimental/tensorboard/ops_util.py35functionReturns operations potentially supported by TensorFlow Lite. The potentially support list contains a list of ops that are partially or fully supported, which is derived by simply scanning op names to check whether they can be handled without real conversion and specific parameters. Given that some ops may be partially supported, the optimal way to determine if a model's operations are supported is by converting using the TensorFlow Lite converter. Returns: A list of SupportedOp.
532530OpsUtilTesttensorflow/tensorflow/lite/experimental/tensorboard/ops_util_test.py24class
533531maintensorflow/tensorflow/lite/g3doc/tools/build_java_api_docs.py53function
534532maintensorflow/tensorflow/lite/g3doc/tools/build_py_api_docs.py55function
535533time_wrappingtensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_augmentation.py29functionGenerate (molecule/denominator)x speed data.
536534augment_datatensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_augmentation.py43functionPerform data augmentation.
537535TestAugmentationtensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_augmentation_test.py32class
538536DataLoadertensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_load.py35classLoads data and prepares for training.
539537TestLoadtensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_load_test.py30class
540538prepare_original_datatensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_prepare.py46functionRead collected data from files.
541539generate_negative_datatensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_prepare.py86functionGenerate negative data labeled as 'negative6~8'.
542540write_datatensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_prepare.py143function
543541TestPreparetensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_prepare_test.py32class
544542read_datatensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split.py40function
545543split_datatensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split.py51functionSplits data into train, validation and test according to ratio.
546544person_splittensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split_person.py41functionSplit data by person.
547545TestSplitPersontensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split_person_test.py28class
548546TestSplittensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split_test.py29class
549547reshape_functiontensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py37function
550548calculate_model_sizetensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py42function
551549build_cnntensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py51functionBuilds a convolutional neural network in Keras.
552550build_lstmtensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py78functionBuilds an LSTM in Keras.
553551load_datatensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py93function
554552build_nettensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py101function
555553train_nettensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py111functionTrains the model.
556554TestTraintensorflow/tensorflow/lite/micro/examples/magic_wand/train/train_test.py33class
557555to_cctensorflow/tensorflow/lite/micro/examples/micro_speech/CMSIS/create_constants.py26functionWrites table values to a C++ source file.
558556to_htensorflow/tensorflow/lite/micro/examples/micro_speech/CMSIS/create_constants.py44functionWrites a header file for the table values.
559557new_data_to_arraytensorflow/tensorflow/lite/micro/examples/micro_speech/apollo3/captured_data_to_wav.py28function
560558new_data_to_arraytensorflow/tensorflow/lite/micro/examples/micro_speech/apollo3/compare_1k.py29functionConverts file information to an in-memory array.
561559to_floattensorflow/tensorflow/lite/micro/examples/micro_speech/apollo3/compare_1k.py63function
562560check_file_existencetensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py52function
563561show_and_save_bitmapstensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py60functionDisplay and save a list of bitmaps. Args: input_file: input file name bitmap_list: list of numpy arrays to represent bitmap images channels: color channel count
564562reshape_bitmapstensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py87functionReshape flat integer arrays. Args: frame_list: list of 1-D arrays to represent raw image data width: image width in pixels height: image height in pixels channels: color channel count Returns: list of numpy arrays to represent bitmap images
565563parse_filetensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py109functionConvert log file to array of pixels. Args: inputfile: log file to parse width: image width in pixels height: image height in pixels channels: color channel count Returns: list 1-D arrays to represent raw image data.
566564maintensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py159function
567565RawToBitmapTesttensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap_test.py94class
568566generate_conv_modeltensorflow/tensorflow/lite/micro/testing/generate_test_models.py34functionCreates a basic Keras model and converts to tflite. This model does not make any relevant classifications. It only exists to generate a model that is designed to run on embedded devices.
569567maintensorflow/tensorflow/lite/micro/testing/generate_test_models.py74function
570568rename_example_subfolder_filestensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py29functionMoves source files in example subfolders to equivalents at root.
571569move_person_datatensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py41functionMoves the downloaded person model into the examples folder.
572570move_person_data_experimentaltensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py61functionMoves the downloaded person model into the examples folder.
573571move_image_data_experimentaltensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py83functionMoves the downloaded image detection model into the examples folder.
574572rename_example_main_inostensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py104functionMakes sure the .ino sketch files match the example name.
575573maintensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py114functionControl the rewriting of source files.
576574parse_argstensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py124functionConverts the raw arguments into accessible flags.
577575sanitize_xmltensorflow/tensorflow/lite/micro/tools/make/generate_keil_project.py29functionUses a allowlist to avoid generating bad XML.
578576maintensorflow/tensorflow/lite/micro/tools/make/generate_keil_project.py34functionGenerates a Keil project file from a template source.
579577parse_argstensorflow/tensorflow/lite/micro/tools/make/generate_keil_project.py82functionConverts the raw arguments into accessible flags.
580578maintensorflow/tensorflow/lite/micro/tools/make/merge_arduino_zips.py27functionMerges multiple Arduino zipfiles into a single result.
581579parse_argstensorflow/tensorflow/lite/micro/tools/make/merge_arduino_zips.py39functionConverts the raw arguments into accessible flags.
582580replace_includestensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py29functionUpdates any includes to reference the new Arduino library paths.
583581replace_maintensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py43functionUpdates any occurrences of a bare main definition to the Arduino equivalent.
584582check_ino_functionstensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py51functionEnsures the required functions exist.
585583add_example_ino_library_includetensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py65functionMakes sure the example includes the header that loads the library.
586584replace_example_includestensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py71functionUpdates any includes for local example files.
587585maintensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py85functionTransforms the input source file to work when exported to Arduino.
588586parse_argstensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py108functionConverts the raw arguments into accessible flags.
589587replace_arduino_includestensorflow/tensorflow/lite/micro/tools/make/transform_source.py36functionUpdates any includes to reference the new Arduino library paths.
590588replace_arduino_maintensorflow/tensorflow/lite/micro/tools/make/transform_source.py50functionUpdates any occurrences of a bare main definition to the Arduino equivalent.
591589check_ino_functionstensorflow/tensorflow/lite/micro/tools/make/transform_source.py58functionEnsures the required functions exist.
592590add_example_ino_library_includetensorflow/tensorflow/lite/micro/tools/make/transform_source.py72functionMakes sure the example includes the header that loads the library.
593591replace_arduino_example_includestensorflow/tensorflow/lite/micro/tools/make/transform_source.py78functionUpdates any includes for local example files.
594592replace_esp_example_includestensorflow/tensorflow/lite/micro/tools/make/transform_source.py92functionUpdates any includes for local example files.
595593transform_arduino_sourcestensorflow/tensorflow/lite/micro/tools/make/transform_source.py109functionTransform sources for the Arduino platform. Args: input_lines: A sequence of lines from the input file to process. flags: Flags indicating which transformation(s) to apply. Returns: The transformed output as a string.
596594transform_esp_sourcestensorflow/tensorflow/lite/micro/tools/make/transform_source.py138functionTransform sources for the ESP-IDF platform. Args: input_lines: A sequence of lines from the input file to process. flags: Flags indicating which transformation(s) to apply. Returns: The transformed output as a string.
597595maintensorflow/tensorflow/lite/micro/tools/make/transform_source.py158functionTransforms the input source file to work when exported as example.
598596parse_argstensorflow/tensorflow/lite/micro/tools/make/transform_source.py171functionConverts the raw arguments into accessible flags.
599597_requires_input_statstensorflow/tensorflow/lite/python/convert.py49function
600598_try_convert_to_unicodetensorflow/tensorflow/lite/python/convert.py67function
601599OpsSettensorflow/tensorflow/lite/python/convert.py80classEnum class defining the sets of ops available to generate TFLite models. WARNING: Experimental interface, subject to change.
602600ConverterErrortensorflow/tensorflow/lite/python/convert.py120classRaised when an error occurs during model conversion.
603601mlir_quantizetensorflow/tensorflow/lite/python/convert.py125functionQuantize `input_data_str` with calibration results. Args: input_data_str: Input data in serialized form (e.g. a TFLITE model with calibration results). disable_per_channel: Bool indicating whether to do per-channel or per-tensor quantization fully_quantize: Bool indicating whether to fully quantize the model. Besides model body, the input/output will be quantized as well. inference_type: Data type for the activations. The default value is int8. Returns: Quantized model in serialized form (e.g. a TFLITE model) with floating-point inputs and outputs.
604602mlir_sparsifytensorflow/tensorflow/lite/python/convert.py150functionSparsify `input_data_str` to encode sparse tensor with proper format. Args: input_data_str: Input data in serialized form (e.g. a TFLITE model). Returns: Sparsified model in serialized form (e.g. a TFLITE model).
605603toco_convert_protostensorflow/tensorflow/lite/python/convert.py162functionConvert `input_data_str` according to model and toco parameters. Unless you know what you are doing consider using the more friendly `tf.compat.v1.lite.toco_convert`. Args: model_flags_str: Serialized proto describing model properties, see `toco/model_flags.proto`. toco_flags_str: Serialized proto describing conversion properties, see `toco/toco_flags.proto`. input_data_str: Input data in serialized form (e.g. a graphdef is common) debug_info_str: Serialized `GraphDebugInfo` proto describing logging information. (default None) enable_mlir_converter: Enables MLIR-based conversion instead of the default TOCO conversion. (default False) Returns: Converted model in serialized form (e.g. a TFLITE model is common). Raises: ConverterError: When conversion fails in TFLiteConverter, usually due to ops not being supported. RuntimeError: When conversion fails, an exception is raised with the error message embedded.
606604build_toco_convert_protostensorflow/tensorflow/lite/python/convert.py291functionBuilds protocol buffers describing a conversion of a model using TOCO. Typically this is to convert from TensorFlow GraphDef to TFLite, in which case the default `input_format` and `output_format` are sufficient. Args: input_tensors: List of input tensors. Type and shape are computed using `foo.shape` and `foo.dtype`. output_tensors: List of output tensors (only .name is used from this). inference_type: Target data type of real-number arrays in the output file. Must be `{tf.float32, tf.uint8, tf.int8}`. (default tf.float32) inference_input_type: Target data type of real-number input arrays. Allows for a different type for input arrays in the case of quantization. Must be `{tf.float32, tf.uint8, tf.int8}`. (default `inference_type`) input_format: Type of data to read Currently must be `{TENSORFLOW_GRAPHDEF}`. (default TENSORFLOW_GRAPHDEF) input_shapes: Input array shape. It needs to be a list of the same length as `input_tensors`, or None. (default None) output_format: Output file format. Currently must be `{TFLITE, GRAPHVIZ_DOT}`. (default TFLITE) quantized_input_stats: List of tuples of floats representing the mean and standard deviation. Each tuple maps to the corresponding input tensor. Only need if `inference_input_type` is `QUANTIZED_UINT8` or `INT8`. real_input_value = (quantized_input_value - mean_value) / std_dev_value. (default None) default_ranges_stats: Tuple of integers representing (min, max) range values for all arrays without a specified range. Intended for experimenting with quantization via "dummy quantization". (default None) drop_control_dependency: Boolean indicating whether to drop control dependencies silently. This is due to TFLite not supporting control dependencies. (default True) reorder_across_fake_quant: Boolean indicating whether to reorder FakeQuant nodes in unexpected locations. Used when the location of the FakeQuant nodes is preventing graph transformations necessary to convert the graph. Results in a graph that differs from the quantized training graph, potentially causing differing arithmetic behavior. (default False) allow_custom_ops: Boolean indicating whether to allow custom operations. When false any unknown operation is an error. When true, custom ops are created for any op that is unknown. The developer will need to provide these to the TensorFlow Lite runtime with a custom resolver. (default False) custom_opdefs: List of strings representing custom ops OpDefs that are included in the GraphDef. Required when using custom operations with the MLIR-based converter. (default None) change_concat_input_ranges: Boolean to change behavior of min/max ranges for inputs and outputs of the concat operator for quantized models. Changes the ranges of concat operator overlap when true. (default False) post_training_quantize: Boolean indicating whether to quantize the weights of the converted float model. Model size will be reduced and there will be latency improvements (at the cost of accuracy). (default False) quantize_to_float16: Boolean indicating whether to convert float buffers to float16. (default False) dump_graphviz_dir: Full filepath of folder to dump the graphs at various stages of processing GraphViz .dot files. Preferred over --output_format=GRAPHVIZ_DOT in order to keep the requirements of the output file. (default None) dump_graphviz_video: Boolean indicating whether to dump the graph after every graph transformation. (default False) target_ops: Experimental flag, subject to change. Set of OpsSet options indicating which converter to use. (default set([OpsSet.TFLITE_BUILTINS])) allow_nonexistent_arrays: Allow specifying array names that don't exist or are unused in the final graph. (default False) debug_info: `GraphDebugInfo` proto containing the stack traces for the original nodes referred by the converted graph. conversion_summary_dir: A string, the path to the generated conversion logs. saved_model_dir: Filepath of the saved model to be converted. This value will be non-empty only when the saved model import path will be used. Otherwises, the graph def-based conversion will be processed. saved_model_version: SavedModel file format version of The saved model file to be converted. This value will be set only when the SavedModel import path will be used. saved_model_tags: Set of string saved model tags, formatted in the comma-separated value. This value will be set only when the SavedModel import path will be used. saved_model_exported_names: Names to be exported (default: export all) when the saved model import path is on. This value will be set only when the SavedModel import path will be used. Returns: model_flags, toco_flags, debug_info: three protocol buffers describing the conversion process and debug information. Raises: ValueError: If the input tensor type is unknown Missing mean_values or std_dev_values RuntimeError: If TOCO fails to convert (in which case the runtime error's error text will contain the TOCO error log)
607605toco_convert_graph_deftensorflow/tensorflow/lite/python/convert.py485function"Convert a model using TOCO. This function is used to convert GraphDefs that cannot be loaded into TensorFlow to TFLite. Conversion can be customized by providing arguments that are forwarded to `build_toco_convert_protos` (see documentation for details). Args: input_data: Input data (i.e. often `sess.graph_def`), input_arrays_with_shape: Tuple of strings representing input tensor names and list of integers representing input shapes (e.g., [("foo" : [1, 16, 16, 3])]). Use only when graph cannot be loaded into TensorFlow and when `input_tensors` is None. (default None) output_arrays: List of output tensors to freeze graph with. Use only when graph cannot be loaded into TensorFlow and when `output_tensors` is None. (default None) enable_mlir_converter: Enables MLIR-based conversion instead of TOCO conversion. *args: See `build_toco_convert_protos`, **kwargs: See `build_toco_convert_protos`. Returns: The converted data. For example if TFLite was the destination, then this will be a tflite flatbuffer in a bytes array. Raises: Defined in `build_toco_convert_protos`.
608606toco_convert_impltensorflow/tensorflow/lite/python/convert.py541function"Convert a model using TOCO. Typically this function is used to convert from TensorFlow GraphDef to TFLite. Conversion can be customized by providing arguments that are forwarded to `build_toco_convert_protos` (see documentation for details). Args: input_data: Input data (i.e. often `sess.graph_def`), input_tensors: List of input tensors. Type and shape are computed using `foo.shape` and `foo.dtype`. output_tensors: List of output tensors (only .name is used from this). enable_mlir_converter: Enables MLIR-based conversion instead of TOCO conversion. *args: See `build_toco_convert_protos`, **kwargs: See `build_toco_convert_protos`. Returns: The converted data. For example if TFLite was the destination, then this will be a tflite flatbuffer in a bytes array. Raises: Defined in `build_toco_convert_protos`.
609607toco_converttensorflow/tensorflow/lite/python/convert.py580functionConvert a model using TOCO. Typically this function is used to convert from TensorFlow GraphDef to TFLite. Conversion can be customized by providing arguments that are forwarded to `build_toco_convert_protos` (see documentation for details). This function has been deprecated. Please use `lite.TFLiteConverter` instead. Args: input_data: Input data (i.e. often `sess.graph_def`), input_tensors: List of input tensors. Type and shape are computed using `foo.shape` and `foo.dtype`. output_tensors: List of output tensors (only .name is used from this). *args: See `build_toco_convert_protos`, **kwargs: See `build_toco_convert_protos`. Returns: The converted data. For example if TFLite was the destination, then this will be a tflite flatbuffer in a bytes array. Raises: Defined in `build_toco_convert_protos`.
610608run_maintensorflow/tensorflow/lite/python/convert_file_to_c_source.py29functionMain in convert_file_to_c_source.py.
611609maintensorflow/tensorflow/lite/python/convert_file_to_c_source.py101function
612610_log_tensor_detailstensorflow/tensorflow/lite/python/convert_saved_model.py30functionLog tensor details: name, shape, and type.
613611get_meta_graph_deftensorflow/tensorflow/lite/python/convert_saved_model.py46functionValidate saved_model and extract MetaGraphDef. Args: saved_model_dir: saved_model path to convert. tag_set: Set of tag(s) of the MetaGraphDef to load. Returns: The meta_graph_def used for tflite conversion. Raises: ValueError: No valid MetaGraphDef for given tag_set.
614612get_signature_deftensorflow/tensorflow/lite/python/convert_saved_model.py63functionGet the signature def from meta_graph with given signature_key. Args: meta_graph: meta_graph_def. signature_key: signature_def in the meta_graph_def. Returns: The signature_def used for tflite conversion. Raises: ValueError: Given signature_key is not valid for this meta_graph.
615613get_inputs_outputstensorflow/tensorflow/lite/python/convert_saved_model.py88functionGet inputs and outputs from SignatureDef. Args: signature_def: SignatureDef in the meta_graph_def for conversion. Returns: The inputs and outputs in the graph for conversion.
616614_get_tensorstensorflow/tensorflow/lite/python/convert_saved_model.py112functionGets the tensors associated with the tensor names. Either signature_def_tensor_names or user_tensor_names should be provided. If the user provides tensors, the tensors associated with the user provided tensor names are provided. Otherwise, the tensors associated with the names in the SignatureDef are provided. Args: graph: GraphDef representing graph. signature_def_tensor_names: Tensor names stored in either the inputs or outputs of a SignatureDef. (default None) user_tensor_names: Tensor names provided by the user. (default None) Returns: List of tensors. Raises: ValueError: signature_def_tensors and user_tensor_names are undefined or empty. user_tensor_names are not valid.
617615freeze_saved_modeltensorflow/tensorflow/lite/python/convert_saved_model.py155functionConverts a SavedModel to a frozen graph. Args: saved_model_dir: SavedModel directory to convert. input_arrays: List of input tensors to freeze graph with. Uses input arrays from SignatureDef when none are provided. input_shapes: Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo": : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). output_arrays: List of output tensors to freeze graph with. Uses output arrays from SignatureDef when none are provided. tag_set: Set of tags identifying the MetaGraphDef within the SavedModel to analyze. All tags in the tag set must be present. signature_key: Key identifying SignatureDef containing inputs and outputs. Returns: frozen_graph_def: Frozen GraphDef. in_tensors: List of input tensors for the graph. out_tensors: List of output tensors for the graph. graph: `Graph` object. Raises: ValueError: SavedModel doesn't contain a MetaGraphDef identified by tag_set. signature_key is not in the MetaGraphDef. assets/ directory is in the MetaGraphDef. input_shapes does not match the length of input_arrays. input_arrays or output_arrays are not valid.
618616FreezeSavedModelTesttensorflow/tensorflow/lite/python/convert_saved_model_test.py40class
619617ConvertTesttensorflow/tensorflow/lite/python/convert_test.py38class
620618ConvertTestOpHinttensorflow/tensorflow/lite/python/convert_test.py168classTest the hint to stub functionality.
621619_tf_exporttensorflow/tensorflow/lite/python/interpreter.py37function
622620Delegatetensorflow/tensorflow/lite/python/interpreter.py42classPython wrapper class to manage TfLiteDelegate objects. The shared library is expected to have two functions: TfLiteDelegate* tflite_plugin_create_delegate( char**, char**, size_t, void (*report_error)(const char *)) void tflite_plugin_destroy_delegate(TfLiteDelegate*) The first one creates a delegate object. It may return NULL to indicate an error (with a suitable error message reported by calling report_error()). The second one destroys delegate object and must be called for every created delegate object. Passing NULL as argument value is allowed, i.e. tflite_plugin_destroy_delegate(tflite_plugin_create_delegate(...)) always works.
623621load_delegatetensorflow/tensorflow/lite/python/interpreter.py132functionReturns loaded Delegate object. Args: library: Name of shared library containing the [TfLiteDelegate](https://www.tensorflow.org/lite/performance/delegates). options: Dictionary of options that are required to load the delegate. All keys and values in the dictionary should be convertible to str. Consult the documentation of the specific delegate for required and legal options. (default None) Returns: Delegate object. Raises: ValueError: Delegate failed to load. RuntimeError: If delegate loading is used on unsupported platform.
624622Interpretertensorflow/tensorflow/lite/python/interpreter.py159classInterpreter interface for TensorFlow Lite Models. This makes the TensorFlow Lite interpreter accessible in Python. It is possible to use this interpreter in a multithreaded Python environment, but you must be sure to call functions of a particular instance from only one thread at a time. So if you want to have 4 threads running different inferences simultaneously, create an interpreter for each one as thread-local data. Similarly, if you are calling invoke() in one thread on a single interpreter but you want to use tensor() on another thread once it is done, you must use a synchronization primitive between the threads to ensure invoke has returned before calling tensor().
625623InterpreterWithCustomOpstensorflow/tensorflow/lite/python/interpreter.py552classInterpreter interface for TensorFlow Lite Models that accepts custom ops. The interface provided by this class is experimental and therefore not exposed as part of the public API. Wraps the tf.lite.Interpreter class and adds the ability to load custom ops by providing the names of functions that take a pointer to a BuiltinOpResolver and add a custom op.
626624InterpreterCustomOpsTesttensorflow/tensorflow/lite/python/interpreter_test.py43class
627625InterpreterTesttensorflow/tensorflow/lite/python/interpreter_test.py63class
628626InterpreterTestErrorPropagationtensorflow/tensorflow/lite/python/interpreter_test.py260class
629627InterpreterTensorAccessorTesttensorflow/tensorflow/lite/python/interpreter_test.py298class
630628InterpreterDelegateTesttensorflow/tensorflow/lite/python/interpreter_test.py353class
631629Optimizetensorflow/tensorflow/lite/python/lite.py88classEnum defining the optimizations to apply when generating tflite graphs. Some optimizations may come at the cost of accuracy. DEFAULT Default optimization strategy. Converter will do its best to improve size and latency based on the information provided. Enhanced optimizations are gained by providing a representative_dataset. This is recommended, and is currently equivalent to the modes below. Currently, weights will be quantized and if representative_dataset is provided, activations for quantizable operations will also be quantized. OPTIMIZE_FOR_SIZE Deprecated. Does the same as DEFAULT. OPTIMIZE_FOR_LATENCY Deprecated. Does the same as DEFAULT.
632630RepresentativeDatasettensorflow/tensorflow/lite/python/lite.py131classRepresentative dataset to evaluate optimizations. A representative dataset that can be used to evaluate optimizations by the converter. E.g. converter can use these examples to estimate (min, max) ranges by calibrating the model on inputs. This can allow converter to quantize a converted floating point model.
633631TargetSpectensorflow/tensorflow/lite/python/lite.py153classSpecification of target device. Details about target device. Converter optimizes the generated model for specific device. Attributes: supported_ops: Experimental flag, subject to change. Set of OpsSet options supported by the device. (default set([OpsSet.TFLITE_BUILTINS])) supported_types: List of types for constant values on the target device. Supported values are types exported by lite.constants. Frequently, an optimization choice is driven by the most compact (i.e. smallest) type in this list (default [constants.FLOAT])
634632QuantizationModetensorflow/tensorflow/lite/python/lite.py177classQuantizationMode determines the quantized conversion from user options.
635633TFLiteConverterBasetensorflow/tensorflow/lite/python/lite.py384classConverter subclass to share functionality between V1 and V2 converters.
636634TFLiteConverterBaseV2tensorflow/tensorflow/lite/python/lite.py522classConverter subclass to share functionality between V2 converters. Attributes: allow_custom_ops: Boolean indicating whether to allow custom operations. When False, any unknown operation is an error. When True, custom ops are created for any op that is unknown. The developer needs to provide these to the TensorFlow Lite runtime with a custom resolver. (default False) optimizations: Experimental flag, subject to change. A list of optimizations to apply when converting the model. E.g. `[Optimize.DEFAULT]` representative_dataset: A representative dataset that can be used to generate input and output samples for the model. The converter can use the dataset to evaluate different optimizations. Note that this is an optional attribute but it is necessary if INT8 is the only support builtin ops in target ops. target_spec: Experimental flag, subject to change. Specification of target device. inference_input_type: Data type of the input layer. Note that integer types (tf.int8 and tf.uint8) are currently only supported for post training integer quantization. (default tf.float32, must be in {tf.float32, tf.int8, tf.uint8}) inference_output_type: Data type of the output layer. Note that integer types (tf.int8 and tf.uint8) are currently only supported for post training integer quantization. (default tf.float32, must be in {tf.float32, tf.int8, tf.uint8}) experimental_new_converter: Experimental flag, subject to change. Enables MLIR-based conversion instead of TOCO conversion. (default True)
637635TFLiteSavedModelConverterV2tensorflow/tensorflow/lite/python/lite.py652classConverts the given SavedModel into TensorFlow Lite model. Attributes: saved_model_dir: Directory of the SavedModel.
638636TFLiteKerasModelConverterV2tensorflow/tensorflow/lite/python/lite.py719classConverts the given Keras model into TensorFlow Lite model.
639637TFLiteFrozenGraphConverterV2tensorflow/tensorflow/lite/python/lite.py840classConverts the given frozen graph into TensorFlow Lite model.
640638TFLiteConverterV2tensorflow/tensorflow/lite/python/lite.py910classConverts a TensorFlow model into TensorFlow Lite model. Attributes: allow_custom_ops: Boolean indicating whether to allow custom operations. When False, any unknown operation is an error. When True, custom ops are created for any op that is unknown. The developer needs to provide these to the TensorFlow Lite runtime with a custom resolver. (default False) optimizations: Experimental flag, subject to change. A list of optimizations to apply when converting the model. E.g. `[Optimize.DEFAULT]` representative_dataset: A representative dataset that can be used to generate input and output samples for the model. The converter can use the dataset to evaluate different optimizations. Note that this is an optional attribute but it is necessary if INT8 is the only support builtin ops in target ops. target_spec: Experimental flag, subject to change. Specification of target device. inference_input_type: Data type of the input layer. Note that integer types (tf.int8 and tf.uint8) are currently only supported for post training integer quantization. (default tf.float32, must be in {tf.float32, tf.int8, tf.uint8}) inference_output_type: Data type of the output layer. Note that integer types (tf.int8 and tf.uint8) are currently only supported for post training integer quantization. (default tf.float32, must be in {tf.float32, tf.int8, tf.uint8}) experimental_new_converter: Experimental flag, subject to change. Enables MLIR-based conversion instead of TOCO conversion. (default True) Example usage: ```python # Converting a SavedModel to a TensorFlow Lite model. converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir) tflite_model = converter.convert() # Converting a tf.Keras model to a TensorFlow Lite model. converter = tf.lite.TFLiteConverter.from_keras_model(model) tflite_model = converter.convert() # Converting ConcreteFunctions to a TensorFlow Lite model. converter = tf.lite.TFLiteConverter.from_concrete_functions([func]) tflite_model = converter.convert() ```
641639TFLiteConverterBaseV1tensorflow/tensorflow/lite/python/lite.py1085classConverter subclass to share functionality between V1 converters. Attributes: inference_type: Target data type of real-number arrays in the output file. Must be `{tf.float32, tf.uint8}`. If `optimzations` are provided, this parameter is ignored. (default tf.float32) inference_input_type: Target data type of real-number input arrays. Allows for a different type for input arrays. If an integer type is provided and `optimizations` are not used, `quantized_inputs_stats` must be provided. If `inference_type` is tf.uint8, signaling conversion to a fully quantized model from a quantization-aware trained input model, then `inference_input_type` defaults to tf.uint8. In all other cases, `inference_input_type` defaults to tf.float32. Must be `{tf.float32, tf.uint8, tf.int8}` inference_output_type: Target data type of real-number output arrays. Allows for a different type for output arrays. If `inference_type` is tf.uint8, signaling conversion to a fully quantized model from a quantization-aware trained output model, then `inference_output_type` defaults to tf.uint8. In all other cases, `inference_output_type` must be tf.float32, an error will be thrown otherwise. Must be `{tf.float32, tf.uint8, tf.int8}` output_format: Output file format. Currently must be `{TFLITE, GRAPHVIZ_DOT}`. (default TFLITE) quantized_input_stats: Dict of strings representing input tensor names mapped to tuple of floats representing the mean and standard deviation of the training data (e.g., {"foo" : (0., 1.)}). Only need if `inference_input_type` is `QUANTIZED_UINT8`. real_input_value = (quantized_input_value - mean_value) / std_dev_value. (default {}) default_ranges_stats: Tuple of integers representing (min, max) range values for all arrays without a specified range. Intended for experimenting with quantization via "dummy quantization". (default None) drop_control_dependency: Boolean indicating whether to drop control dependencies silently. This is due to TFLite not supporting control dependencies. (default True) reorder_across_fake_quant: Boolean indicating whether to reorder FakeQuant nodes in unexpected locations. Used when the location of the FakeQuant nodes is preventing graph transformations necessary to convert the graph. Results in a graph that differs from the quantized training graph, potentially causing differing arithmetic behavior. (default False) change_concat_input_ranges: Boolean to change behavior of min/max ranges for inputs and outputs of the concat operator for quantized models. Changes the ranges of concat operator overlap when true. (default False) allow_custom_ops: Boolean indicating whether to allow custom operations. When false any unknown operation is an error. When true, custom ops are created for any op that is unknown. The developer will need to provide these to the TensorFlow Lite runtime with a custom resolver. (default False) post_training_quantize: Deprecated. Please specify `[Optimize.DEFAULT]` for `optimizations` instead. Boolean indicating whether to quantize the weights of the converted float model. Model size will be reduced and there will be latency improvements (at the cost of accuracy). (default False) dump_graphviz_dir: Full filepath of folder to dump the graphs at various stages of processing GraphViz .dot files. Preferred over --output_format=GRAPHVIZ_DOT in order to keep the requirements of the output file. (default None) dump_graphviz_video: Boolean indicating whether to dump the graph after every graph transformation. (default False) conversion_summary_dir: A string indicating the path to the generated conversion logs. target_ops: Deprecated. Please specify `target_spec.supported_ops` instead. Set of OpsSet options indicating which converter to use. (default set([OpsSet.TFLITE_BUILTINS])) target_spec: Experimental flag, subject to change. Specification of target device. optimizations: Experimental flag, subject to change. A list of optimizations to apply when converting the model. E.g. `[Optimize.DEFAULT]` representative_dataset: A representative dataset that can be used to generate input and output samples for the model. The converter can use the dataset to evaluate different optimizations. experimental_new_converter: Experimental flag, subject to change. Enables MLIR-based conversion instead of TOCO conversion. (default True)
642640TFLiteSavedModelConvertertensorflow/tensorflow/lite/python/lite.py1410classConverts the given SavedModel into TensorFlow Lite model. Attributes: saved_model_dir: Directory of the SavedModel.
643641TFLiteKerasModelConvertertensorflow/tensorflow/lite/python/lite.py1458classConverts the given SavedModel into TensorFlow Lite model.
644642TFLiteFrozenGraphConvertertensorflow/tensorflow/lite/python/lite.py1586classConverts the given frozen graph def into TensorFlow Lite model.
645643TFLiteConvertertensorflow/tensorflow/lite/python/lite.py1634classConvert a TensorFlow model into `output_format`. This is used to convert from a TensorFlow GraphDef, SavedModel or tf.keras model into either a TFLite FlatBuffer or graph visualization. Attributes: inference_type: Target data type of real-number arrays in the output file. Must be `{tf.float32, tf.uint8}`. If `optimzations` are provided, this parameter is ignored. (default tf.float32) inference_input_type: Target data type of real-number input arrays. Allows for a different type for input arrays. If an integer type is provided and `optimizations` are not used, `quantized_inputs_stats` must be provided. If `inference_type` is tf.uint8, signaling conversion to a fully quantized model from a quantization-aware trained input model, then `inference_input_type` defaults to tf.uint8. In all other cases, `inference_input_type` defaults to tf.float32. Must be `{tf.float32, tf.uint8, tf.int8}` inference_output_type: Target data type of real-number output arrays. Allows for a different type for output arrays. If `inference_type` is tf.uint8, signaling conversion to a fully quantized model from a quantization-aware trained output model, then `inference_output_type` defaults to tf.uint8. In all other cases, `inference_output_type` must be tf.float32, an error will be thrown otherwise. Must be `{tf.float32, tf.uint8, tf.int8}` output_format: Output file format. Currently must be `{TFLITE, GRAPHVIZ_DOT}`. (default TFLITE) quantized_input_stats: Dict of strings representing input tensor names mapped to tuple of floats representing the mean and standard deviation of the training data (e.g., {"foo" : (0., 1.)}). Only need if `inference_input_type` is `QUANTIZED_UINT8`. real_input_value = (quantized_input_value - mean_value) / std_dev_value. (default {}) default_ranges_stats: Tuple of integers representing (min, max) range values for all arrays without a specified range. Intended for experimenting with quantization via "dummy quantization". (default None) drop_control_dependency: Boolean indicating whether to drop control dependencies silently. This is due to TFLite not supporting control dependencies. (default True) reorder_across_fake_quant: Boolean indicating whether to reorder FakeQuant nodes in unexpected locations. Used when the location of the FakeQuant nodes is preventing graph transformations necessary to convert the graph. Results in a graph that differs from the quantized training graph, potentially causing differing arithmetic behavior. (default False) change_concat_input_ranges: Boolean to change behavior of min/max ranges for inputs and outputs of the concat operator for quantized models. Changes the ranges of concat operator overlap when true. (default False) allow_custom_ops: Boolean indicating whether to allow custom operations. When false any unknown operation is an error. When true, custom ops are created for any op that is unknown. The developer will need to provide these to the TensorFlow Lite runtime with a custom resolver. (default False) post_training_quantize: Deprecated. Please specify `[Optimize.DEFAULT]` for `optimizations` instead. Boolean indicating whether to quantize the weights of the converted float model. Model size will be reduced and there will be latency improvements (at the cost of accuracy). (default False) dump_graphviz_dir: Full filepath of folder to dump the graphs at various stages of processing GraphViz .dot files. Preferred over --output_format=GRAPHVIZ_DOT in order to keep the requirements of the output file. (default None) dump_graphviz_video: Boolean indicating whether to dump the graph after every graph transformation. (default False) conversion_summary_dir: A string indicating the path to the generated conversion logs. target_ops: Deprecated. Please specify `target_spec.supported_ops` instead. Set of OpsSet options indicating which converter to use. (default set([OpsSet.TFLITE_BUILTINS])) target_spec: Experimental flag, subject to change. Specification of target device. optimizations: Experimental flag, subject to change. A list of optimizations to apply when converting the model. E.g. `[Optimize.DEFAULT]` representative_dataset: A representative dataset that can be used to generate input and output samples for the model. The converter can use the dataset to evaluate different optimizations. experimental_new_converter: Experimental flag, subject to change. Enables MLIR-based conversion instead of TOCO conversion. (default True) Example usage: ```python # Converting a GraphDef from session. converter = tf.compat.v1.TFLiteConverter.from_session( sess, in_tensors, out_tensors) tflite_model = converter.convert() open("converted_model.tflite", "wb").write(tflite_model) # Converting a GraphDef from file. converter = tf.compat.v1.TFLiteConverter.from_frozen_graph( graph_def_file, input_arrays, output_arrays) tflite_model = converter.convert() open("converted_model.tflite", "wb").write(tflite_model) # Converting a SavedModel. converter = tf.compat.v1.TFLiteConverter.from_saved_model(saved_model_dir) tflite_model = converter.convert() open("converted_model.tflite", "wb").write(tflite_model) # Converting a tf.keras model. converter = tf.compat.v1.TFLiteConverter.from_keras_model_file(keras_model) tflite_model = converter.convert() open("converted_model.tflite", "wb").write(tflite_model) ```
646644TocoConvertertensorflow/tensorflow/lite/python/lite.py1979classConvert a TensorFlow model into `output_format` using TOCO. This class has been deprecated. Please use `lite.TFLiteConverter` instead.
647645FromSessionTesttensorflow/tensorflow/lite/python/lite_flex_test.py38class
648646FromConcreteFunctionTesttensorflow/tensorflow/lite/python/lite_flex_test.py103class
649647LiteTesttensorflow/tensorflow/lite/python/lite_test.py59classBase class of all the tests in this module.
650648TestModelstensorflow/tensorflow/lite/python/lite_test.py63class
651649FromConstructortensorflow/tensorflow/lite/python/lite_test.py76class
652650FromSessionTesttensorflow/tensorflow/lite/python/lite_test.py115class
653651FromFrozenGraphFiletensorflow/tensorflow/lite/python/lite_test.py1464class
654652FromFrozenGraphObjectDetectiontensorflow/tensorflow/lite/python/lite_test.py1650class
655653FromSavedModelTesttensorflow/tensorflow/lite/python/lite_test.py1713class
656654MyAddLayertensorflow/tensorflow/lite/python/lite_test.py1897class
657655FromKerasFiletensorflow/tensorflow/lite/python/lite_test.py1912class
658656GrapplerTesttensorflow/tensorflow/lite/python/lite_test.py2292class
659657ImportOpsUtilTesttensorflow/tensorflow/lite/python/lite_test.py2384class
660658DefaultConverterAttrsTesttensorflow/tensorflow/lite/python/lite_test.py2390class
661659FromConcreteFunctionTesttensorflow/tensorflow/lite/python/lite_v2_test.py48class
662660FromSavedModelTesttensorflow/tensorflow/lite/python/lite_v2_test.py498class
663661FromKerasModelTesttensorflow/tensorflow/lite/python/lite_v2_test.py709class
664662ControlFlowTesttensorflow/tensorflow/lite/python/lite_v2_test.py825class
665663GrapplerTesttensorflow/tensorflow/lite/python/lite_v2_test.py1013class
666664UnknownShapestensorflow/tensorflow/lite/python/lite_v2_test.py1047class
667665ModelTesttensorflow/tensorflow/lite/python/lite_v2_test_util.py34classBase test class for TensorFlow Lite 2.x model tests.
668666OpHinttensorflow/tensorflow/lite/python/op_hint.py97classA class that helps build tflite function invocations. It allows you to take a bunch of TensorFlow ops and annotate the construction such that toco knows how to convert it to tflite. This embeds a pseudo function in a TensorFlow graph. This allows embedding high-level API usage information in a lower level TensorFlow implementation so that an alternative implementation can be substituted later. Essentially, any "input" into this pseudo op is fed into an identity, and attributes are added to that input before being used by the constituent ops that make up the pseudo op. A similar process is done to any output that is to be exported from the current op.
669667_LiteOperandtensorflow/tensorflow/lite/python/op_hint.py471classAbstract operand for a tflite hint function._dynamic_rnn_loop. This is a base class that handles representing arguments to an OpHint. It also is able to serialize operands to the stubbed graph_def. Child classes are responsible for being able to store information about the hint identity operators. They are also responsible for knowing how to serialize to output graphdefs. Typically this will be implemented by holding one or more identity nodes that were previously discovered as hints.
670668_LiteSingleOperandtensorflow/tensorflow/lite/python/op_hint.py518classA simple operand that is non-aggregated (i.e. most hints).
671669_LiteAggregateOperandtensorflow/tensorflow/lite/python/op_hint.py544classAn operand for a tflite hint function that is aggregated from many. For example, an LSTM is a grid of operators that are all related. Inputs going into them may need to be fused, so they should all be tracked as related arguments.
672670_LiteFuncCalltensorflow/tensorflow/lite/python/op_hint.py670classRepresent a TensorFlow Lite custom function. This is uses to accumulate found hints in the graphdef into a single conceptual unit. Attributes: inputs: inputs to the op (hash from index # to argument) outputs: outputs to the op (hash from index # to argument) function_name: the tflite custom op name to use uuid: a unique call id for this particular call (i.e. multiple function calls would have the same function_name but different uuids. params: A param name to key value for op constant data. I.e. for axis on a reduction, strides on a convolution, etc. level: Level of the OpHint. children_inputs_mappings: If the Ophint has children, children inputs mappings indicate how their inputs & outputs are mapped.
673671_find_all_hints_in_nodestensorflow/tensorflow/lite/python/op_hint.py730functionLook at the all the input nodes and return a list of LiteFuncCall objs. Args: nodes: A TensorFlow graph_def to look for LiteFuncCalls. Returns: a list of `LifeFuncCall` objects in the form
674672_extract_topology_sequence_mappingtensorflow/tensorflow/lite/python/op_hint.py795function
675673_find_children_hints_in_while_looptensorflow/tensorflow/lite/python/op_hint.py800functionFind children hints and all nodes inside the while loop. Args: function_def: Function def of the while loop. nodes_mapping: While loop input_arg : real node name. Returns: Ordered children hints and all re-mapped nodes inside the while loop.
676674_find_children_hintstensorflow/tensorflow/lite/python/op_hint.py833functionFind all children hints. For a given OpHint, we find all children hints inside it, we also copy all the nodes inside function defs (if applicable) to the original graph_def, they are returned in a list as well. Args: call: Parent OpHint that contains children ophints. graph_def: Original graph def. Returns: Ordered children hints inside the parent ophint; new graph def that contains nodes inside function defs (if applicable); nodes inside function defs.
677675_tensor_name_basetensorflow/tensorflow/lite/python/op_hint.py887functionRemoves the device assignment code from a tensor. e.g. _tensor_name_base("foo:3") => "foo" Args: full_tensor_name: A tensor name that is annotated with a device placement (this is what tensor flow introspection gives). Returns: A name without any device assignment.
678676_tensorflow_output_nametensorflow/tensorflow/lite/python/op_hint.py904function
679677_check_subgraph_closedtensorflow/tensorflow/lite/python/op_hint.py910functionChecks to make sure node only connects to predecessor graph through inputs. Args: n: Node to check reachable_by_input: Nodes that are reachable by all inputs of subgraph input_nodes_set: The set of nodes that are "inputs". name_to_input_name: Maps from name to the list of inputs. Raises: TypeError: If the given node uses items past inputs directly.
680678_convert_single_op_hint_to_stubtensorflow/tensorflow/lite/python/op_hint.py940functionGiven a graph_def, converts `call` into a stub and returns a new graph_def. Args: call: A single function call to be converted. graph_def: A graph_def to use as input (that has call obviously). function_def_nodes: Nodes inside the function def those are not connected to the graph. is_last_run: Whether it is the last run for a given pass (for OpHint has children). Returns: A new transformed graph-def that has call as a stub (single op). Note: after this process, the graph_def can no longer be loaded into the tensorflow runtime, so all future manipulations are done in graph_def level.
681679_remove_one_redundant_stack_unstacktensorflow/tensorflow/lite/python/op_hint.py1070functionRemoves a stack->unstack pattern from in_graph_def in a returned graph. Args: in_graph_def: Graph def to use as input. Returns: Simplified tuple (graph_def, changed_something) where changed_something is true if anything was done.
682680_remove_redundant_stack_unstacktensorflow/tensorflow/lite/python/op_hint.py1161function
683681_get_correct_mappingtensorflow/tensorflow/lite/python/op_hint.py1170function
684682_convert_op_hints_to_stubs_helpertensorflow/tensorflow/lite/python/op_hint.py1180functionConverts a graph_def to a new graph_def where all op hints are stubbed. Args: graph_def: A graph def that we should convert. write_callback: A function pointer that can be used to write intermediate steps of graph transformation (optional). Returns: A new stubbed graph_def.
685683find_all_hinted_output_nodestensorflow/tensorflow/lite/python/op_hint.py1257functionFind all Ophints output nodes in the graph. This is used to get all the output nodes those are ophinted, it is important for operation like convert_variables_to_constants keep all ophints structure. Note: only one of session or graph_def should be used, not both. Why this can be useful? Some TensorFlow ops (e.g. bidirectional rnn), can generate multiple outputs for unfused subgraph. If not all output nodes are consumed, graph optimization can potentially drop the unused nodes and cause ophints in an invalid states (due to missing ophinted output nodes). So it's important for us to find all those hinted output nodes and make sure they're not discarded away. Args: session: A TensorFlow session that contains the graph to convert. graph_def: A graph def that we should convert. Returns: A list of OpHints output nodes. Raises: ValueError: If both session and graph_def are provided.
686684is_ophint_convertedtensorflow/tensorflow/lite/python/op_hint.py1292function
687685convert_op_hints_to_stubstensorflow/tensorflow/lite/python/op_hint.py1305functionConverts a graphdef with LiteOp hints into stub operations. This is used to prepare for toco conversion of complex intrinsic usages. Note: only one of session or graph_def should be used, not both. Args: session: A TensorFlow session that contains the graph to convert. graph_def: A graph def that we should convert. write_callback: A function pointer that can be used to write intermediate steps of graph transformation (optional). Returns: A new graphdef with all ops contained in OpHints being replaced by a single op call with the right parameters. Raises: ValueError: If both session and graph_def are provided.
688686_parse_arraytensorflow/tensorflow/lite/python/tflite_convert.py39function
689687_parse_settensorflow/tensorflow/lite/python/tflite_convert.py45function
690688_parse_inference_typetensorflow/tensorflow/lite/python/tflite_convert.py51functionConverts the inference type to the value of the constant. Args: value: str representing the inference type. flag: str representing the flag name. Returns: tf.dtype. Raises: ValueError: Unsupported value.
691689_get_tflite_convertertensorflow/tensorflow/lite/python/tflite_convert.py74functionMakes a TFLiteConverter object based on the flags provided. Args: flags: argparse.Namespace object containing TFLite flags. Returns: TFLiteConverter object. Raises: ValueError: Invalid flags.
692690_convert_tf1_modeltensorflow/tensorflow/lite/python/tflite_convert.py122functionCalls function to convert the TensorFlow 1.X model into a TFLite model. Args: flags: argparse.Namespace object. Raises: ValueError: Invalid flags.
693691_convert_tf2_modeltensorflow/tensorflow/lite/python/tflite_convert.py219functionCalls function to convert the TensorFlow 2.0 model into a TFLite model. Args: flags: argparse.Namespace object. Raises: ValueError: Unsupported file format.
694692_check_tf1_flagstensorflow/tensorflow/lite/python/tflite_convert.py244functionChecks the parsed and unparsed flags to ensure they are valid in 1.X. Raises an error if previously support unparsed flags are found. Raises an error for parsed flags that don't meet the required conditions. Args: flags: argparse.Namespace object containing TFLite flags. unparsed: List of unparsed flags. Raises: ValueError: Invalid flags.
695693_check_tf2_flagstensorflow/tensorflow/lite/python/tflite_convert.py313functionChecks the parsed and unparsed flags to ensure they are valid in 2.X. Args: flags: argparse.Namespace object containing TFLite flags. Raises: ValueError: Invalid flags.
696694_get_tf1_flagstensorflow/tensorflow/lite/python/tflite_convert.py327functionReturns ArgumentParser for tflite_convert for TensorFlow 1.X. Args: parser: ArgumentParser
697695_get_tf2_flagstensorflow/tensorflow/lite/python/tflite_convert.py511functionReturns ArgumentParser for tflite_convert for TensorFlow 2.0. Args: parser: ArgumentParser
698696_ParseExperimentalNewConvertertensorflow/tensorflow/lite/python/tflite_convert.py535classHelper class to parse --experimental_new_converter argument.
699697_get_parsertensorflow/tensorflow/lite/python/tflite_convert.py565functionReturns an ArgumentParser for tflite_convert. Args: use_v2_converter: Indicates which converter to return. Return: ArgumentParser.
700698run_maintensorflow/tensorflow/lite/python/tflite_convert.py596functionMain in tflite_convert.py.
701699maintensorflow/tensorflow/lite/python/tflite_convert.py639function
702700TestModelstensorflow/tensorflow/lite/python/tflite_convert_test.py45class
703701TfLiteConvertV1Testtensorflow/tensorflow/lite/python/tflite_convert_test.py81class
704702TfLiteConvertV2Testtensorflow/tensorflow/lite/python/tflite_convert_test.py298class
705703ArgParserTesttensorflow/tensorflow/lite/python/tflite_convert_test.py339class
706704convert_dtype_to_tflite_typetensorflow/tensorflow/lite/python/util.py59functionConverts tf.dtype to TFLite proto type. Args: tf_dtype: tf.dtype Raises: ValueError: Unsupported tf.dtype. Returns: types_flag_pb2.
707705get_tensor_nametensorflow/tensorflow/lite/python/util.py77functionReturns name of the input tensor. Args: tensor: tf.Tensor Returns: str
708706get_tensors_from_tensor_namestensorflow/tensorflow/lite/python/util.py98functionGets the Tensors associated with the `tensor_names` in the provided graph. Args: graph: TensorFlow Graph. tensor_names: List of strings that represent names of tensors in the graph. Returns: A list of Tensor objects in the same order the names are provided. Raises: ValueError: tensor_names contains an invalid tensor name.
709707set_tensor_shapestensorflow/tensorflow/lite/python/util.py141functionSets Tensor shape for each tensor if the shape is defined. Args: tensors: TensorFlow ops.Tensor. shapes: Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo": : [1, 16, 16, 3]}). Raises: ValueError: `shapes` contains an invalid tensor. `shapes` contains an invalid shape for a valid tensor.
710708get_grappler_configtensorflow/tensorflow/lite/python/util.py172functionCreates a tf.compat.v1.ConfigProto for configuring Grappler. Args: optimizers_list: List of strings that represents the list of optimizers. Returns: tf.ConfigProto.
711709run_graph_optimizationstensorflow/tensorflow/lite/python/util.py188functionApply standard TensorFlow optimizations to the graph_def. Args: graph_def: Frozen GraphDef to be optimized. input_arrays: List of arrays that are considered inputs of the graph. output_arrays: List of arrays that are considered outputs of the graph. config: tf.ConfigProto. graph: TensorFlow Graph. Required when Eager mode is enabled. (default None) Returns: A new, optimized GraphDef.
712710_convert_op_hints_if_presenttensorflow/tensorflow/lite/python/util.py230function
713711freeze_graphtensorflow/tensorflow/lite/python/util.py241functionReturns a frozen GraphDef. Runs a Grappler pass and freezes a graph with Variables in it. Otherwise the existing GraphDef is returned. The Grappler pass is only run on models that are frozen in order to inline the functions in the graph. If OpHints is present, it will try to convert the OpHint graph. Args: sess: TensorFlow Session. input_tensors: List of input tensors. output_tensors: List of output tensors (only .name is used from this). Returns: Frozen GraphDef.
714712is_frozen_graphtensorflow/tensorflow/lite/python/util.py281functionDetermines if the graph is frozen. Determines if a graph has previously been frozen by checking for any operations of type Variable*. If variables are found, the graph is not frozen. Args: sess: TensorFlow Session. Returns: Bool.
715713build_debug_info_functensorflow/tensorflow/lite/python/util.py300functionReturns a method to retrieve the `GraphDebugInfo` from the original graph. Args: original_graph: The original `Graph` containing all the op stack traces. Returns: A function which retrieves the stack traces from the original graph and converts them to a `GraphDebugInfo` for a given set of nodes.
716714convert_debug_info_functensorflow/tensorflow/lite/python/util.py339functionReturns a method to retrieve the `GraphDebugInfo` from the original graph. Args: saved_debug_info: The `GraphDebugInfo` containing all the debug info. Returns: A function which retrieves the stack traces from the original graph and converts them to a `GraphDebugInfo` for a given set of nodes.
717715get_debug_infotensorflow/tensorflow/lite/python/util.py368functionReturns the debug info for the original nodes in the `converted_graph`. Args: nodes_to_debug_info_func: The method to collect the op debug info for the nodes. converted_graph: A `GraphDef` after optimization and transformation. Returns: `GraphDebugInfo` for all the original nodes in `converted_graph`.
718716convert_bytes_to_c_sourcetensorflow/tensorflow/lite/python/util.py399functionReturns strings representing a C constant array containing `data`. Args: data: Byte array that will be converted into a C constant. array_name: String to use as the variable name for the constant array. max_line_width: The longest line length, for formatting purposes. include_guard: Name to use for the include guard macro definition. include_path: Optional path to include in the source file. use_tensorflow_license: Whether to include the standard TensorFlow Apache2 license in the generated files. Returns: Text that can be compiled as a C source file to link in the data as a literal array of values. Text that can be used as a C header file to reference the literal array.
719717UtilTesttensorflow/tensorflow/lite/python/util_test.py39class
720718TensorFunctionsTesttensorflow/tensorflow/lite/python/util_test.py124class
721719wrapped_toco_converttensorflow/tensorflow/lite/python/wrap_toco.py29functionWraps TocoConvert with lazy loader.
722720wrapped_get_potentially_supported_opstensorflow/tensorflow/lite/python/wrap_toco.py41functionWraps TocoGetPotentiallySupportedOps with lazy loader.
723721wrapped_experimental_mlir_quantizetensorflow/tensorflow/lite/python/wrap_toco.py46functionWraps experimental mlir quantize model.
724722wrapped_experimental_mlir_sparsifytensorflow/tensorflow/lite/python/wrap_toco.py55functionWraps experimental mlir sparsify model.
725723Calibratortensorflow/tensorflow/lite/python/optimize/calibrator.py33classCalibrates a floating point model and then quantizes it. This is an internal class, not a public interface.
726724CalibratorTesttensorflow/tensorflow/lite/python/optimize/calibrator_test.py33class
727725TemporaryDirectoryResourcetensorflow/tensorflow/lite/schema/upgrade_schema.py57function
728726Convertertensorflow/tensorflow/lite/schema/upgrade_schema.py65classConverts TensorFlow flatbuffer models from old to new version of schema. This can convert between any version to the latest version. It uses an incremental upgrade strategy to go from version to version. Usage: converter = Converter() converter.Convert("a.tflite", "a.json") converter.Convert("b.json", "b.tflite")
729727maintensorflow/tensorflow/lite/schema/upgrade_schema.py344function
730728JsonDumpAndFlushtensorflow/tensorflow/lite/schema/upgrade_schema_test.py242functionWrite the dictionary `data` to a JSON file `fp` (and flush). Args: data: in a dictionary that is JSON serializable. fp: File-like object
731729TestSchemaUpgradetensorflow/tensorflow/lite/schema/upgrade_schema_test.py253class
732730maintensorflow/tensorflow/lite/testing/generate_examples.py100function
733731MultiGenStatetensorflow/tensorflow/lite/testing/generate_examples_lib.py176classState of multiple set generation process. This state class stores the information needed when generating the examples for multiple test set. The stored informations are open archive object to be shared, information on test target for current iteration of generation, accumulated generation results.
734732Optionstensorflow/tensorflow/lite/testing/generate_examples_lib.py203classAll options for example generation.
735733_prepare_dirtensorflow/tensorflow/lite/testing/generate_examples_lib.py244function
736734generate_examplestensorflow/tensorflow/lite/testing/generate_examples_lib.py256functionGenerate examples for a test set. Args: options: Options containing information to generate examples. Raises: RuntimeError: if the test function cannot be found.
737735generate_multi_set_examplestensorflow/tensorflow/lite/testing/generate_examples_lib.py294functionGenerate examples for test sets. Args: options: Options containing information to generate examples. test_sets: List of the name of test sets to generate examples.
738736make_report_tabletensorflow/tensorflow/lite/testing/generate_examples_report.py32functionMake an HTML report of the success/failure reports. Args: fp: File-like object in which to put the html. title: "Title of the zip file this pertains to." reports: a list of conversion attempts. (report_args, report_vals) i.e. ({"shape": [1,2,3], "type": "tf.float32"}, {"tf": "SUCCESS", "toco": "FAILURE", "toco_log": "Unsupported type.", "tf_log": ""})
739737toco_optionstensorflow/tensorflow/lite/testing/toco_convert.py31functionCreate TOCO options to process a model. Args: data_types: input and inference types used by TOCO. input_arrays: names of the input tensors output_arrays: name of the output tensors shapes: shapes of the input tensors extra_toco_options: additional toco options Returns: the options in a string.
740738toco_converttensorflow/tensorflow/lite/testing/toco_convert.py78functionConvert a model's graph def into a tflite model. NOTE: this currently shells out to the toco binary, but we would like convert to Python API tooling in the future. Args: options: An Options instance. graph_def: A GraphDef object. input_tensors: List of input tensor tuples `(name, shape, type)`. output_tensors: List of output tensors (names). **kwargs: Extra options to be passed. Returns: output tflite model, log_txt from conversion or None, log_txt if it did not convert properly.
741739register_make_test_functiontensorflow/tensorflow/lite/testing/zip_test_utils.py55function
742740get_test_functiontensorflow/tensorflow/lite/testing/zip_test_utils.py65functionGet the test function according to the test function name.
743741ExtraTocoOptionstensorflow/tensorflow/lite/testing/zip_test_utils.py88classAdditional toco options besides input, output, shape.
744742create_tensor_datatensorflow/tensorflow/lite/testing/zip_test_utils.py106functionBuild tensor data spreading the range [min_value, max_value).
745743create_scalar_datatensorflow/tensorflow/lite/testing/zip_test_utils.py126functionBuild scalar tensor data range from min_value to max_value exclusively.
746744freeze_graphtensorflow/tensorflow/lite/testing/zip_test_utils.py144functionFreeze the current graph. Args: session: Tensorflow sessions containing the graph outputs: List of output tensors Returns: The frozen graph_def.
747745format_resulttensorflow/tensorflow/lite/testing/zip_test_utils.py158functionConvert a tensor to a format that can be used in test specs.
748746write_examplestensorflow/tensorflow/lite/testing/zip_test_utils.py168functionGiven a list `examples`, write a text format representation. The file format is csv like with a simple repeated pattern. We would ike to use proto here, but we can't yet due to interfacing with the Android team using this format. Args: fp: File-like object to write to. examples: Example dictionary consisting of keys "inputs" and "outputs"
749747write_test_casestensorflow/tensorflow/lite/testing/zip_test_utils.py196functionGiven a dictionary of `examples`, write a text format representation. The file format is protocol-buffer-like, even though we don't use proto due to the needs of the Android team. Args: fp: File-like object to write to. model_name: Filename where the model was written to, relative to filename. examples: Example dictionary consisting of keys "inputs" and "outputs"
750748get_input_shapes_maptensorflow/tensorflow/lite/testing/zip_test_utils.py225functionGets a map of input names to shapes. Args: input_tensors: List of input tensor tuples `(name, shape, type)`. Returns: {string : list of integers}.
751749_normalize_output_nametensorflow/tensorflow/lite/testing/zip_test_utils.py251functionRemove :0 suffix from tensor names.
752750make_zip_of_teststensorflow/tensorflow/lite/testing/zip_test_utils.py262functionHelper to make a zip file of a bunch of TensorFlow models. This does a cartesian product of the dictionary of test_parameters and calls make_graph() for each item in the cartesian product set. If the graph is built successfully, then make_test_inputs() is called to build expected input/output value pairs. The model is then converted to tflite with toco, and the examples are serialized with the tflite model into a zip file (2 files per item in the cartesian product set). Args: options: An Options instance. test_parameters: Dictionary mapping to lists for each parameter. e.g. `{"strides": [[1,3,3,1], [1,2,2,1]], "foo": [1.2, 1.3]}` make_graph: function that takes current parameters and returns tuple `[input1, input2, ...], [output1, output2, ...]` make_test_inputs: function taking `curr_params`, `session`, `input_tensors`, `output_tensors` and returns tuple `(input_values, output_values)`. extra_toco_options: Additional toco options. use_frozen_graph: Whether or not freeze graph before toco converter. expected_tf_failures: Number of times tensorflow is expected to fail in executing the input graphs. In some cases it is OK for TensorFlow to fail because the one or more combination of parameters is invalid. Raises: RuntimeError: if there are converter errors that can't be ignored.
753751get_filepathtensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py47functionReturns the full path of the filename. Args: filename: Subdirectory and name of the model file. base_dir: Base directory containing model file. Returns: str.
754752get_imagetensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py63functionReturns an image loaded into an np.ndarray with dims [1, size, size, 3]. Args: size: Size of image. Returns: np.ndarray.
755753_converttensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py80functionConverts the model. Args: converter: TFLiteConverter object. **kwargs: Additional arguments to be passed into the converter. Supported flags are {"target_ops", "post_training_quantize", "quantize_to_float16"}. Returns: The converted TFLite model in serialized format. Raises: ValueError: Invalid version number.
756754_get_tflite_interpretertensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py103functionCreates a TFLite interpreter with resized input tensors. Args: tflite_model: Serialized TensorFlow Lite model. input_shapes_resize: A map where the key is the input tensor name and the value is the shape of the input tensor. This resize happens after model conversion, prior to calling allocate tensors. (default None) Returns: lite.Interpreter
757755_get_input_data_maptensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py127functionGenerates a map of input data based on the TFLite model. Args: tflite_model: Serialized TensorFlow Lite model. input_data: List of np.ndarray. Returns: {str: [np.ndarray]}.
758756_generate_random_input_datatensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py146functionGenerates input data based on the input tensors in the TFLite model. Args: tflite_model: Serialized TensorFlow Lite model. seed: Integer seed for the random generator. (default None) input_data_range: A map where the key is the input tensor name and the value is a tuple (min_val, max_val) which specifies the value range of the corresponding input tensor. For example, '{'input1': (1, 5)}' means to generate a random value for tensor `input1` within range [1.0, 5.0) (half-inclusive). (default None) input_shapes_resize: A map where the key is the input tensor name and the value is the shape of the input tensor. This resize happens after model conversion, prior to calling allocate tensors. (default None) Returns: ([np.ndarray], {str : [np.ndarray]}).
759757_evaluate_tflite_modeltensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py191functionReturns evaluation of input data on TFLite model. Args: tflite_model: Serialized TensorFlow Lite model. input_data: List of np.ndarray. input_shapes_resize: A map where the key is the input tensor name and the value is the shape of the input tensor. This resize happens after model conversion, prior to calling allocate tensors. (default None) Returns: List of np.ndarray.
760758evaluate_frozen_graphtensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py222functionReturns a function that evaluates the frozen graph on input data. Args: filename: Full filepath of file containing frozen GraphDef. input_arrays: List of input tensors to freeze graph with. output_arrays: List of output tensors to freeze graph with. Returns: Lambda function ([np.ndarray data] : [np.ndarray result]).
761759evaluate_saved_modeltensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py260functionReturns a function that evaluates the SavedModel on input data. Args: directory: SavedModel directory to convert. tag_set: Set of tags identifying the MetaGraphDef within the SavedModel to analyze. All tags in the tag set must be present. signature_key: Key identifying SignatureDef containing inputs and outputs. Returns: Lambda function ([np.ndarray data] : [np.ndarray result]).
762760evaluate_keras_modeltensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py286functionReturns a function that evaluates the tf.keras model on input data. Args: filename: Full filepath of HDF5 file containing the tf.keras model. Returns: Lambda function ([np.ndarray data] : [np.ndarray result]).
763761compare_modelstensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py299functionCompares TensorFlow and TFLite models. Unless the input data is provided, the models are compared with random data. Args: tflite_model: Serialized TensorFlow Lite model. tf_eval_func: Lambda function that takes in input data and outputs the results of the TensorFlow model ([np.ndarray data] : [np.ndarray result]). input_shapes_resize: A map where the key is the input tensor name and the value is the shape of the input tensor. This resize happens after model conversion, prior to calling allocate tensors. (default None) input_data: np.ndarray to pass into models during inference. (default None) input_data_range: A map where the key is the input tensor name and the value is a tuple (min_val, max_val) which specifies the value range of the corresponding input tensor. For example, '{'input1': (1, 5)}' means to generate a random value for tensor `input1` within range [1.0, 5.0) (half-inclusive). (default None) tolerance: Decimal place to check accuracy to. (default 5).
764762compare_models_v2tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py336functionCompares TensorFlow and TFLite models for TensorFlow 2.0. Unless the input data is provided, the models are compared with random data. Currently only 1 input and 1 output are supported by this function. Args: tflite_model: Serialized TensorFlow Lite model. tf_eval_func: Function to evaluate TensorFlow model. Either a lambda function that takes in input data and outputs the results or a TensorFlow ConcreteFunction. input_data: np.ndarray to pass into models during inference. (default None). input_data_range: A map where the key is the input tensor name and the value is a tuple (min_val, max_val) which specifies the value range of the corresponding input tensor. For example, '{'input1': (1, 5)}' means to generate a random value for tensor `input1` within range [1.0, 5.0) (half-inclusive). (default None) tolerance: Decimal place to check accuracy to. (default 5)
765763test_frozen_graph_quanttensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py390functionSanity check to validate post quantize flag alters the graph. This test does not check correctness of the converted model. It converts the TensorFlow frozen graph to TFLite with and without the post_training_quantized flag. It ensures some tensors have different types between the float and quantized models in the case of an all TFLite model or mix-and-match model. It ensures tensor types do not change in the case of an all Flex model. Args: filename: Full filepath of file containing frozen GraphDef. input_arrays: List of input tensors to freeze graph with. output_arrays: List of output tensors to freeze graph with. input_shapes: Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None) **kwargs: Additional arguments to be passed into the converter. Raises: ValueError: post_training_quantize flag doesn't act as intended.
766764test_frozen_graphtensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py459functionValidates the TensorFlow frozen graph converts to a TFLite model. Converts the TensorFlow frozen graph to TFLite and checks the accuracy of the model on random data. Args: filename: Full filepath of file containing frozen GraphDef. input_arrays: List of input tensors to freeze graph with. output_arrays: List of output tensors to freeze graph with. input_shapes: Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None) input_shapes_resize: A map where the key is the input tensor name and the value is the shape of the input tensor. This resize happens after model conversion, prior to calling allocate tensors. (default None) input_data: np.ndarray to pass into models during inference. (default None). input_data_range: A map where the key is the input tensor name and the value is a tuple (min_val, max_val) which specifies the value range of the corresponding input tensor. For example, '{'input1': (1, 5)}' means to generate a random value for tensor `input1` within range [1.0, 5.0) (half-inclusive). (default None) **kwargs: Additional arguments to be passed into the converter.
767765test_saved_modeltensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py504functionValidates the TensorFlow SavedModel converts to a TFLite model. Converts the TensorFlow SavedModel to TFLite and checks the accuracy of the model on random data. Args: directory: SavedModel directory to convert. input_shapes: Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None) tag_set: Set of tags identifying the MetaGraphDef within the SavedModel to analyze. All tags in the tag set must be present. signature_key: Key identifying SignatureDef containing inputs and outputs. input_data: np.ndarray to pass into models during inference. (default None). input_data_range: A map where the key is the input tensor name and the value is a tuple (min_val, max_val) which specifies the value range of the corresponding input tensor. For example, '{'input1': (1, 5)}' means to generate a random value for tensor `input1` within range [1.0, 5.0) (half-inclusive). (default None) **kwargs: Additional arguments to be passed into the converter.
768766test_saved_model_v2tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py548functionValidates the TensorFlow SavedModel converts to a TFLite model. Converts the TensorFlow SavedModel to TFLite and checks the accuracy of the model on random data. Args: directory: SavedModel directory to convert. tag_set: Set of tags identifying the MetaGraphDef within the SavedModel to analyze. All tags in the tag set must be present. signature_key: Key identifying SignatureDef containing inputs and outputs. input_data: np.ndarray to pass into models during inference. (default None). input_data_range: A map where the key is the input tensor name and the value is a tuple (min_val, max_val) which specifies the value range of the corresponding input tensor. For example, '{'input1': (1, 5)}' means to generate a random value for tensor `input1` within range [1.0, 5.0) (half-inclusive). (default None) **kwargs: Additional arguments to be passed into the converter.
769767test_saved_model_v2_quant_float16tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py587functionValidates the TensorFlow SavedModel converts to a TFLite model.
770768test_keras_modeltensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py623functionValidates the tf.keras model converts to a TFLite model. Converts the tf.keras model to TFLite and checks the accuracy of the model on random data. Args: filename: Full filepath of HDF5 file containing the tf.keras model. input_arrays: List of input tensors to freeze graph with. input_shapes: Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None) input_data: np.ndarray to pass into models during inference. (default None). input_data_range: A map where the key is the input tensor name and the value is a tuple (min_val, max_val) which specifies the value range of the corresponding input tensor. For example, '{'input1': (1, 5)}' means to generate a random value for tensor `input1` within range [1.0, 5.0) (half-inclusive). (default None) **kwargs: Additional arguments to be passed into the converter.
771769test_keras_model_v2tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py661functionValidates the tf.keras model converts to a TFLite model. Converts the tf.keras model to TFLite and checks the accuracy of the model on random data. Args: filename: Full filepath of HDF5 file containing the tf.keras model. input_shapes: List of list of integers representing input shapes in the order of the tf.keras model's .input attribute (e.g., [[1, 16, 16, 3]]). (default None) input_data: np.ndarray to pass into models during inference. (default None). input_data_range: A map where the key is the input tensor name and the value is a tuple (min_val, max_val) which specifies the value range of the corresponding input tensor. For example, '{'input1': (1, 5)}' means to generate a random value for tensor `input1` within range [1.0, 5.0) (half-inclusive). (default None) **kwargs: Additional arguments to be passed into the converter.
772770EvaluateFrozenGraphtensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib_test.py42class
773771EvaluateSavedModeltensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib_test.py142class
774772EvaluateKerasModeltensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib_test.py160class
775773make_abs_teststensorflow/tensorflow/lite/testing/op_tests/abs.py28functionMake a set of tests to do abs.
776774make_add_n_teststensorflow/tensorflow/lite/testing/op_tests/add_n.py27functionMake a set of tests for AddN op.
777775make_arg_min_max_teststensorflow/tensorflow/lite/testing/op_tests/arg_min_max.py29functionMake a set of tests to do arg_max.
778776make_batch_to_space_nd_teststensorflow/tensorflow/lite/testing/op_tests/batch_to_space_nd.py28functionMake a set of tests to do batch_to_space_nd.
779777make_binary_op_teststensorflow/tensorflow/lite/testing/op_tests/binary_op.py26functionMake a set of tests to do binary ops with and without broadcast.
780778make_binary_op_tests_functensorflow/tensorflow/lite/testing/op_tests/binary_op.py239functionReturn a function that does a test on a binary operator.
781779make_add_teststensorflow/tensorflow/lite/testing/op_tests/binary_op.py245function
782780make_div_teststensorflow/tensorflow/lite/testing/op_tests/binary_op.py250functionMake zip tests for div op with 5D case.
783781make_sub_teststensorflow/tensorflow/lite/testing/op_tests/binary_op.py267functionMake zip tests for sub op with additional cases.
784782make_mul_teststensorflow/tensorflow/lite/testing/op_tests/binary_op.py287function
785783make_pow_teststensorflow/tensorflow/lite/testing/op_tests/binary_op.py292function
786784make_floor_div_teststensorflow/tensorflow/lite/testing/op_tests/binary_op.py297function
787785make_floor_mod_teststensorflow/tensorflow/lite/testing/op_tests/binary_op.py302function
788786make_squared_difference_teststensorflow/tensorflow/lite/testing/op_tests/binary_op.py307function
789787make_cast_teststensorflow/tensorflow/lite/testing/op_tests/cast.py27functionGenerate examples for cast.
790788make_ceil_teststensorflow/tensorflow/lite/testing/op_tests/ceil.py27functionMake a set of tests to do ceil.
791789make_concat_teststensorflow/tensorflow/lite/testing/op_tests/concat.py27functionMake a set of tests to do concatenation.
792790make_constant_teststensorflow/tensorflow/lite/testing/op_tests/constant.py31functionMake a set of tests to do constant ops.
793791make_conv_teststensorflow/tensorflow/lite/testing/op_tests/conv.py28functionMake a set of tests to do convolution.
794792make_conv2d_transpose_teststensorflow/tensorflow/lite/testing/op_tests/conv2d_transpose.py28functionMake a set of tests to do transpose_conv.
795793make_conv_activation_teststensorflow/tensorflow/lite/testing/op_tests/conv_activation.py27functionMake a set of tests to do convolution with activation.
796794make_conv_relu6_teststensorflow/tensorflow/lite/testing/op_tests/conv_activation.py132functionMake a set of tests to do conv_relu6.
797795make_conv_relu_teststensorflow/tensorflow/lite/testing/op_tests/conv_activation.py138functionMake a set of tests to do conv_relu.
798796relu1tensorflow/tensorflow/lite/testing/op_tests/conv_activation.py143function
799797make_conv_relu1_teststensorflow/tensorflow/lite/testing/op_tests/conv_activation.py151functionMake a set of tests to do conv_relu1.
800798make_conv_to_depthwiseconv_with_shared_weights_teststensorflow/tensorflow/lite/testing/op_tests/conv_to_depthwiseconv_with_shared_weights.py28functionMake a test where 2 Conv ops shared the same constant weight tensor.
801799make_conv_with_shared_weights_teststensorflow/tensorflow/lite/testing/op_tests/conv_with_shared_weights.py28functionMake a test where 2 Conv ops shared the same constant weight tensor.
802800make_cos_teststensorflow/tensorflow/lite/testing/op_tests/cos.py28functionMake a set of tests to do cos.
803801make_depth_to_space_teststensorflow/tensorflow/lite/testing/op_tests/depth_to_space.py27functionMake a set of tests to do depth_to_space.
804802make_depthwiseconv_teststensorflow/tensorflow/lite/testing/op_tests/depthwiseconv.py28functionMake a set of tests to do convolution.
805803_make_elementwise_teststensorflow/tensorflow/lite/testing/op_tests/elementwise.py26functionMake a set of tests to do element-wise operations.
806804make_sin_teststensorflow/tensorflow/lite/testing/op_tests/elementwise.py57functionMake a set of tests to do sin.
807805make_log_teststensorflow/tensorflow/lite/testing/op_tests/elementwise.py63functionMake a set of tests to do log.
808806make_sqrt_teststensorflow/tensorflow/lite/testing/op_tests/elementwise.py69functionMake a set of tests to do sqrt.
809807make_rsqrt_teststensorflow/tensorflow/lite/testing/op_tests/elementwise.py75functionMake a set of tests to do 1/sqrt.
810808make_square_teststensorflow/tensorflow/lite/testing/op_tests/elementwise.py81functionMake a set of tests to do square.
811809make_elu_teststensorflow/tensorflow/lite/testing/op_tests/elu.py28functionMake a set of tests to do (float) tf.nn.elu.
812810make_embedding_lookup_teststensorflow/tensorflow/lite/testing/op_tests/embedding_lookup.py27functionMake a set of tests to do gather.
813811make_equal_teststensorflow/tensorflow/lite/testing/op_tests/equal.py27functionMake a set of tests to do equal.
814812make_exp_teststensorflow/tensorflow/lite/testing/op_tests/exp.py27functionMake a set of tests to do exp.
815813make_expand_dims_teststensorflow/tensorflow/lite/testing/op_tests/expand_dims.py28functionMake a set of tests to do expand_dims.
816814make_eye_teststensorflow/tensorflow/lite/testing/op_tests/eye.py28functionMake a set of tests for tf.eye op.
817815make_fill_teststensorflow/tensorflow/lite/testing/op_tests/fill.py28functionMake a set of tests to do fill.
818816make_floor_teststensorflow/tensorflow/lite/testing/op_tests/floor.py27functionMake a set of tests to do floor.
819817make_fully_connected_teststensorflow/tensorflow/lite/testing/op_tests/fully_connected.py28functionMake a set of tests to do fully_connected.
820818make_fused_batch_norm_teststensorflow/tensorflow/lite/testing/op_tests/fused_batch_norm.py27functionMake a set of tests to do fused_batch_norm.
821819make_gather_teststensorflow/tensorflow/lite/testing/op_tests/gather.py27functionMake a set of tests to do gather.
822820make_gather_nd_teststensorflow/tensorflow/lite/testing/op_tests/gather_nd.py27functionMake a set of tests to do gather_nd.
823821make_gather_with_constant_teststensorflow/tensorflow/lite/testing/op_tests/gather_with_constant.py28functionMake a set of test which feed a constant to gather toco.
824822make_global_batch_norm_teststensorflow/tensorflow/lite/testing/op_tests/global_batch_norm.py27functionMake a set of tests to do batch_norm_with_global_normalization.
825823make_greater_teststensorflow/tensorflow/lite/testing/op_tests/greater.py27functionMake a set of tests to do greater.
826824make_greater_equal_teststensorflow/tensorflow/lite/testing/op_tests/greater_equal.py27functionMake a set of tests to do greater_equal.
827825_tflite_convert_verify_num_opstensorflow/tensorflow/lite/testing/op_tests/hardswish.py29functionVerifies that the result of the conversion is a single op.
828826make_hardswish_teststensorflow/tensorflow/lite/testing/op_tests/hardswish.py47functionMake a set of tests to do hardswish.
829827make_identity_teststensorflow/tensorflow/lite/testing/op_tests/identity.py29functionMake a set of tests to do identity.
830828make_l2norm_teststensorflow/tensorflow/lite/testing/op_tests/l2norm.py28functionMake a set of tests to do l2norm.
831829make_l2norm_shared_epsilon_teststensorflow/tensorflow/lite/testing/op_tests/l2norm_shared_epsilon.py28functionRegression test for a bug (b/122651451).
832830make_leaky_relu_teststensorflow/tensorflow/lite/testing/op_tests/leaky_relu.py28functionMake a set of tests to do LeakyRelu.
833831make_less_teststensorflow/tensorflow/lite/testing/op_tests/less.py27functionMake a set of tests to do less.
834832make_less_equal_teststensorflow/tensorflow/lite/testing/op_tests/less_equal.py27functionMake a set of tests to do less_equal.
835833make_local_response_norm_teststensorflow/tensorflow/lite/testing/op_tests/local_response_norm.py28functionMake a set of tests to do local_response_norm.
836834make_log_softmax_teststensorflow/tensorflow/lite/testing/op_tests/log_softmax.py27functionMake a set of tests to do log_softmax.
837835_make_logical_teststensorflow/tensorflow/lite/testing/op_tests/logic.py26functionMake a set of tests to do logical operations.
838836make_logical_or_teststensorflow/tensorflow/lite/testing/op_tests/logic.py65functionMake a set of tests to do logical_or.
839837make_logical_and_teststensorflow/tensorflow/lite/testing/op_tests/logic.py71functionMake a set of tests to do logical_and.
840838make_logical_xor_teststensorflow/tensorflow/lite/testing/op_tests/logic.py77functionMake a set of tests to do logical_xor, test logical_not as well.
841839make_lstm_teststensorflow/tensorflow/lite/testing/op_tests/lstm.py29functionMake a set of tests to do basic Lstm cell.
842840make_matrix_diag_teststensorflow/tensorflow/lite/testing/op_tests/matrix_diag.py27functionMake a set of tests for tf.linalg.diag op.
843841make_matrix_set_diag_teststensorflow/tensorflow/lite/testing/op_tests/matrix_set_diag.py27functionMake a set of tests for tf.linalg.set_diag op.
844842make_maximum_teststensorflow/tensorflow/lite/testing/op_tests/maximum.py27functionMake a set of tests to do maximum.
845843make_minimum_teststensorflow/tensorflow/lite/testing/op_tests/minimum.py27functionMake a set of tests to do minimum.
846844make_mirror_pad_teststensorflow/tensorflow/lite/testing/op_tests/mirror_pad.py28functionMake a set of tests to do mirror_pad.
847845make_nearest_upsample_teststensorflow/tensorflow/lite/testing/op_tests/nearest_upsample.py27functionMake a set of tests to do nearest_upsample.
848846make_neg_teststensorflow/tensorflow/lite/testing/op_tests/neg.py27functionMake a set of tests to do neg.
849847make_not_equal_teststensorflow/tensorflow/lite/testing/op_tests/not_equal.py27functionMake a set of tests to do not equal.
850848make_one_hot_teststensorflow/tensorflow/lite/testing/op_tests/one_hot.py27functionMake a set of tests to do one_hot.
851849make_pack_teststensorflow/tensorflow/lite/testing/op_tests/pack.py28functionMake a set of tests to do stack.
852850make_pad_teststensorflow/tensorflow/lite/testing/op_tests/pad.py28functionMake a set of tests to do pad.
853851make_padv2_teststensorflow/tensorflow/lite/testing/op_tests/padv2.py28functionMake a set of tests to do padv2.
854852make_placeholder_with_default_teststensorflow/tensorflow/lite/testing/op_tests/placeholder_with_default.py28functionMake a set of tests to test placeholder_with_default.
855853make_pool_teststensorflow/tensorflow/lite/testing/op_tests/pool.py26functionMake a set of tests to do average pooling. Args: pool_op_in: TensorFlow pooling operation to test i.e. `tf.nn.avg_pool2d`. allow_fully_quantize: bool, whether fully_quantize is allowed. Returns: A function representing the true generator (after curried pool_op_in).
856854make_l2_pooltensorflow/tensorflow/lite/testing/op_tests/pool.py119functionGiven an input perform a sequence of TensorFlow ops to produce l2pool.
857855make_l2_pool_teststensorflow/tensorflow/lite/testing/op_tests/pool.py131function
858856make_avg_pool_teststensorflow/tensorflow/lite/testing/op_tests/pool.py136function
859857make_max_pool_teststensorflow/tensorflow/lite/testing/op_tests/pool.py143function
860858make_prelu_teststensorflow/tensorflow/lite/testing/op_tests/prelu.py28functionMake a set of tests to do PReLU.
861859make_range_teststensorflow/tensorflow/lite/testing/op_tests/range.py27functionMake a set of tests to do range.
862860make_rank_teststensorflow/tensorflow/lite/testing/op_tests/rank.py27functionMake a set of tests to do rank.
863861make_reduce_teststensorflow/tensorflow/lite/testing/op_tests/reduce.py27functionMake a set of tests to do reduce operation. Args: reduce_op: TensorFlow reduce operation to test, i.e. `tf.reduce_mean`. min_value: min value for created tensor data. max_value: max value for created tensor data. boolean_tensor_only: If true, will only generate tensor with boolean value. allow_fully_quantize: bool, whether fully_quantize is allowed. Returns: a function representing the true generator with `reduce_op_in` curried.
864862make_mean_teststensorflow/tensorflow/lite/testing/op_tests/reduce.py219functionMake a set of tests to do mean.
865863make_sum_teststensorflow/tensorflow/lite/testing/op_tests/reduce.py231functionMake a set of tests to do sum.
866864make_reduce_prod_teststensorflow/tensorflow/lite/testing/op_tests/reduce.py243functionMake a set of tests to do prod.
867865make_reduce_max_teststensorflow/tensorflow/lite/testing/op_tests/reduce.py250functionMake a set of tests to do max.
868866make_reduce_min_teststensorflow/tensorflow/lite/testing/op_tests/reduce.py258functionMake a set of tests to do min.
869867make_reduce_any_teststensorflow/tensorflow/lite/testing/op_tests/reduce.py266functionMake a set of tests to do any.
870868make_relu_teststensorflow/tensorflow/lite/testing/op_tests/relu.py28functionMake a set of tests to do relu.
871869make_relu1_teststensorflow/tensorflow/lite/testing/op_tests/relu1.py28functionMake a set of tests to do relu1.
872870make_relu6_teststensorflow/tensorflow/lite/testing/op_tests/relu6.py28functionMake a set of tests to do relu6.
873871make_reshape_teststensorflow/tensorflow/lite/testing/op_tests/reshape.py28functionMake a set of tests to do reshape.
874872make_resize_bilinear_teststensorflow/tensorflow/lite/testing/op_tests/resize_bilinear.py27functionMake a set of tests to do resize_bilinear.
875873make_resize_nearest_neighbor_teststensorflow/tensorflow/lite/testing/op_tests/resize_nearest_neighbor.py27functionMake a set of tests to do resize_nearest_neighbor.
876874make_resolve_constant_strided_slice_teststensorflow/tensorflow/lite/testing/op_tests/resolve_constant_strided_slice.py29functionMake a set of tests to show strided_slice yields incorrect results.
877875make_reverse_sequence_teststensorflow/tensorflow/lite/testing/op_tests/reverse_sequence.py27functionMake a set of tests to do reverse_sequence.
878876make_reverse_v2_teststensorflow/tensorflow/lite/testing/op_tests/reverse_v2.py27functionMake a set of tests to do reverse_v2.
879877make_rfft2d_teststensorflow/tensorflow/lite/testing/op_tests/rfft2d.py28functionMake a set of tests to do rfft2d.
880878make_round_teststensorflow/tensorflow/lite/testing/op_tests/round.py27functionBuild the round op testing graph.
881879make_scatter_nd_teststensorflow/tensorflow/lite/testing/op_tests/scatter_nd.py28functionMake a set of tests to do scatter_nd.
882880make_shape_teststensorflow/tensorflow/lite/testing/op_tests/shape.py28functionMake a set of tests to do shape.
883881make_sigmoid_teststensorflow/tensorflow/lite/testing/op_tests/sigmoid.py27functionMake a set of tests to do sigmoid.
884882make_slice_teststensorflow/tensorflow/lite/testing/op_tests/slice.py29functionMake a set of tests to do slice.
885883make_softmax_teststensorflow/tensorflow/lite/testing/op_tests/softmax.py27functionMake a set of tests to do softmax.
886884make_space_to_batch_nd_teststensorflow/tensorflow/lite/testing/op_tests/space_to_batch_nd.py28functionMake a set of tests to do space_to_batch_nd.
887885make_space_to_depth_teststensorflow/tensorflow/lite/testing/op_tests/space_to_depth.py27functionMake a set of tests to do space_to_depth.
888886make_sparse_to_dense_teststensorflow/tensorflow/lite/testing/op_tests/sparse_to_dense.py29functionMake a set of tests to do sparse to dense.
889887make_split_teststensorflow/tensorflow/lite/testing/op_tests/split.py28functionMake a set of tests to do tf.split.
890888make_splitv_teststensorflow/tensorflow/lite/testing/op_tests/splitv.py28functionMake a set of tests to do tf.split_v.
891889make_squeeze_teststensorflow/tensorflow/lite/testing/op_tests/squeeze.py27functionMake a set of tests to do squeeze.
892890make_squeeze_transpose_teststensorflow/tensorflow/lite/testing/op_tests/squeeze_transpose.py27functionMake a set of tests to do squeeze followed by transpose.
893891_make_strided_slice_teststensorflow/tensorflow/lite/testing/op_tests/strided_slice.py28functionUtility function to make strided_slice_tests based on parameters.
894892make_strided_slice_teststensorflow/tensorflow/lite/testing/op_tests/strided_slice.py100functionMake a set of tests to do strided_slice.
895893make_strided_slice_1d_exhaustive_teststensorflow/tensorflow/lite/testing/op_tests/strided_slice.py208functionMake a set of exhaustive tests for 1D strided_slice.
896894make_strided_slice_np_style_teststensorflow/tensorflow/lite/testing/op_tests/strided_slice_np_style.py29functionMake a set of tests to test strided_slice in np style.
897895make_tanh_teststensorflow/tensorflow/lite/testing/op_tests/tanh.py28functionMake a set of tests to do tanh.
898896make_tile_teststensorflow/tensorflow/lite/testing/op_tests/tile.py27functionMake a set of tests to do tile.
899897make_topk_teststensorflow/tensorflow/lite/testing/op_tests/topk.py28functionMake a set of tests to do topk.
900898make_transpose_teststensorflow/tensorflow/lite/testing/op_tests/transpose.py28functionMake a set of tests to do transpose.
901899make_transpose_conv_teststensorflow/tensorflow/lite/testing/op_tests/transpose_conv.py33functionMake a set of tests to do transpose_conv.
902900make_unfused_gru_teststensorflow/tensorflow/lite/testing/op_tests/unfused_gru.py27functionMake a set of tests for unfused gru op.
903901make_unidirectional_sequence_lstm_teststensorflow/tensorflow/lite/testing/op_tests/unidirectional_sequence_lstm.py29functionMake a set of tests to do unidirectional_sequence_lstm.
904902make_unidirectional_sequence_rnn_teststensorflow/tensorflow/lite/testing/op_tests/unidirectional_sequence_rnn.py29functionMake a set of tests to do unidirectional_sequence_rnn.
905903make_unique_teststensorflow/tensorflow/lite/testing/op_tests/unique.py27functionMake a set of tests for Unique op.
906904make_unpack_teststensorflow/tensorflow/lite/testing/op_tests/unpack.py27functionMake a set of tests to do unpack.
907905make_unroll_batch_matmul_teststensorflow/tensorflow/lite/testing/op_tests/unroll_batch_matmul.py27functionMake a set of tests to test unroll_batch_matmul.
908906make_where_teststensorflow/tensorflow/lite/testing/op_tests/where.py27functionMake a set of tests to do where.
909907make_zeros_like_teststensorflow/tensorflow/lite/testing/op_tests/zeros_like.py27functionMake a set of tests to do zeros_like.
910908html_escapetensorflow/tensorflow/lite/toco/logging/gen_html.py37function
911909get_input_type_from_signaturetensorflow/tensorflow/lite/toco/logging/gen_html.py41functionParses op_signature and returns a string denoting the input tensor type. Args: op_signature: a string specifying the signature of a particular operator. The signature of an operator contains the input tensor's shape and type, output tensor's shape and type, operator's name and its version. It has the following schema: INPUT:input_1_shape::input_1_type::input_2_shape::input_2_type::.. ::OUTPUT:output_1_shape::output_1_type::output_2_shape::output_2_type:: ..::NAME:operator_name ::VERSION:operator_version An example of an operator signature is: INPUT:[1,73,73,160]::float::[64,1,1,160]::float::[64]::float:: OUTPUT:[1,73,73,64]::float::NAME:Conv::VERSION:1 Returns: A string denoting the input tensors' type. In the form of shape/type separated by comma. For example: shape:[1,73,73,160],type:float,shape:[64,1,1,160],type:float,shape:[64], type:float
912910get_operator_typetensorflow/tensorflow/lite/toco/logging/gen_html.py78function
913911HTMLGeneratortensorflow/tensorflow/lite/toco/logging/gen_html.py87classUtility class to generate an HTML report.
914912gen_conversion_log_htmltensorflow/tensorflow/lite/toco/logging/gen_html.py208functionGenerates an HTML report about the conversion process. Args: conversion_log_dir: A string specifying the file directory of the conversion logs. It's required that before calling this function, the `conversion_log_dir` already contains the following files: `toco_log_before.pb`, `toco_log_after.pb`, `toco_tf_graph.dot`, `toco_tflite_graph.dot`. quantization_enabled: A boolean, passed from the tflite converter to indicate whether post-training quantization is enabled during conversion. tflite_graph_path: A string, the filepath to the converted TFLite model. Raises: IOError: When any of the required files doesn't exist.
915913GenHtmlTesttensorflow/tensorflow/lite/toco/logging/gen_html_test.py32class
916914executetensorflow/tensorflow/lite/toco/python/toco_from_protos.py32functionRuns the converter.
917915maintensorflow/tensorflow/lite/toco/python/toco_from_protos.py61function
918916TensorNametensorflow/tensorflow/lite/toco/python/toco_from_protos_test.py30functionGet the canonical (non foo:0 name).
919917TocoFromProtosTesttensorflow/tensorflow/lite/toco/python/toco_from_protos_test.py35class
920918get_imagetensorflow/tensorflow/lite/tools/convert_image_to_csv.py41functionReturns an image loaded into an np.ndarray with dims [height, width, (3 or 1)]. Args: width: Width to rescale the image to. height: Height to rescale the image to. want_grayscale: Whether the result should be converted to grayscale. filepath: Path of the image file.. Returns: np.ndarray of shape (height, width, channels) where channels is 1 if want_grayscale is true, otherwise 3.
921919array_to_int_csvtensorflow/tensorflow/lite/tools/convert_image_to_csv.py65functionConverts all elements in a numerical array to a comma-separated string. Args: array_data: Numerical array to convert. Returns: String containing array values as integers, separated by commas.
922920run_maintensorflow/tensorflow/lite/tools/convert_image_to_csv.py79functionApplication run loop.
923921maintensorflow/tensorflow/lite/tools/convert_image_to_csv.py110function
924922ConvertImageToCsvTesttensorflow/tensorflow/lite/tools/convert_image_to_csv_test.py34class
925923convert_bytearray_to_objecttensorflow/tensorflow/lite/tools/flatbuffer_utils.py38functionConverts a tflite model from a bytearray to an object for parsing.
926924read_modeltensorflow/tensorflow/lite/tools/flatbuffer_utils.py44functionReads a tflite model as a python object. Args: input_tflite_file: Full path name to the input tflite file Raises: RuntimeError: If input_tflite_file path is invalid. IOError: If input_tflite_file cannot be opened. Returns: A python object corresponding to the input tflite file.
927925read_model_with_mutable_tensorstensorflow/tensorflow/lite/tools/flatbuffer_utils.py64functionReads a tflite model as a python object with mutable tensors. Similar to read_model() with the addition that the returned object has mutable tensors (read_model() returns an object with immutable tensors). Args: input_tflite_file: Full path name to the input tflite file Raises: RuntimeError: If input_tflite_file path is invalid. IOError: If input_tflite_file cannot be opened. Returns: A mutable python object corresponding to the input tflite file.
928926convert_object_to_bytearraytensorflow/tensorflow/lite/tools/flatbuffer_utils.py83functionConverts a tflite model from an object to a bytearray.
929927write_modeltensorflow/tensorflow/lite/tools/flatbuffer_utils.py93functionWrites the tflite model, a python object, into the output file. Args: model_object: A tflite model as a python object output_tflite_file: Full path name to the output tflite file. Raises: IOError: If output_tflite_file path is invalid or cannot be opened.
930928strip_stringstensorflow/tensorflow/lite/tools/flatbuffer_utils.py108functionStrips all nonessential strings from the model to reduce model size. We remove the following strings: (find strings by searching ":string" in the tensorflow lite flatbuffer schema) 1. Model description 2. SubGraph name 3. Tensor names We retain OperatorCode custom_code and Metadata name. Args: model: The model from which to remove nonessential strings.
931929randomize_weightstensorflow/tensorflow/lite/tools/flatbuffer_utils.py130functionRandomize weights in a model. Args: model: The model in which to randomize weights. random_seed: The input to the random number generator (default value is 0).
932930WriteReadModelTesttensorflow/tensorflow/lite/tools/flatbuffer_utils_test.py29class
933931StripStringsTesttensorflow/tensorflow/lite/tools/flatbuffer_utils_test.py74class
934932RandomizeWeightsTesttensorflow/tensorflow/lite/tools/flatbuffer_utils_test.py119class
935933maintensorflow/tensorflow/lite/tools/randomize_weights.py34function
936934maintensorflow/tensorflow/lite/tools/strip_strings.py34functionApplication run loop.
937935build_mock_flatbuffer_modeltensorflow/tensorflow/lite/tools/test_utils.py30functionCreates a flatbuffer containing an example model.
938936load_model_from_flatbuffertensorflow/tensorflow/lite/tools/test_utils.py211functionLoads a model as a python object from a flatbuffer model.
939937build_mock_modeltensorflow/tensorflow/lite/tools/test_utils.py218functionCreates an object containing an example model.
940938TensorTypeToNametensorflow/tensorflow/lite/tools/visualize.py202functionConverts a numerical enum to a readable tensor type.
941939BuiltinCodeToNametensorflow/tensorflow/lite/tools/visualize.py210functionConverts a builtin op code enum to a readable name.
942940NameListToStringtensorflow/tensorflow/lite/tools/visualize.py218functionConverts a list of integers to the equivalent ASCII string.
943941OpCodeMappertensorflow/tensorflow/lite/tools/visualize.py229classMaps an opcode index to an op name.
944942DataSizeMappertensorflow/tensorflow/lite/tools/visualize.py245classFor buffers, report the number of bytes.
945943TensorMappertensorflow/tensorflow/lite/tools/visualize.py255classMaps a list of tensor indices to a tooltip hoverable indicator of more.
946944GenerateGraphtensorflow/tensorflow/lite/tools/visualize.py278functionProduces the HTML required to have a d3 visualization of the dag.
947945GenerateTableHtmltensorflow/tensorflow/lite/tools/visualize.py337functionGiven a list of object values and keys to print, make an HTML table. Args: items: Items to print an array of dicts. keys_to_print: (key, display_fn). `key` is a key in the object. i.e. items[0][key] should exist. display_fn is the mapping function on display. i.e. the displayed html cell will have the string returned by `mapping_fn(items[0][key])`. display_index: add a column which is the index of each row in `items`. Returns: An html table.
948946CamelCaseToSnakeCasetensorflow/tensorflow/lite/tools/visualize.py375functionConverts an identifier in CamelCase to snake_case.
949947FlatbufferToDicttensorflow/tensorflow/lite/tools/visualize.py381functionConverts a hierarchy of FB objects into a nested dict. We avoid transforming big parts of the flat buffer into python arrays. This speeds conversion from ten minutes to a few seconds on big graphs. Args: fb: a flat buffer structure. (i.e. ModelT) preserve_as_numpy: true if all downstream np.arrays should be preserved. false if all downstream np.array should become python arrays Returns: A dictionary representing the flatbuffer rather than a flatbuffer object.
950948CreateDictFromFlatbuffertensorflow/tensorflow/lite/tools/visualize.py413function
951949CreateHtmlFiletensorflow/tensorflow/lite/tools/visualize.py419functionGiven a tflite model in `tflite_input` file, produce html description.
952950maintensorflow/tensorflow/lite/tools/visualize.py506function
953951VisualizeTesttensorflow/tensorflow/lite/tools/visualize_test.py29class
954952maintensorflow/tensorflow/lite/tools/zip_files.py32function
955953_get_ground_truth_detectionstensorflow/tensorflow/lite/tools/evaluation/tasks/coco_object_detection/preprocess_coco_minival.py44functionProcesses the annotations JSON file and returns ground truth data corresponding to allowlisted image IDs. Args: instances_file: COCO instances JSON file, usually named as instances_val20xx.json. allowlist_file: File containing COCO minival image IDs to allowlist for evaluation, one per line. num_images: Number of allowlisted images to pre-process. First num_images are chosen based on sorted list of filenames. If None, all allowlisted files are preprocessed. Returns: A dict mapping image id (int) to a per-image dict that contains: 'filename', 'image' & 'height' mapped to filename & image dimensions respectively AND 'detections' to a list of detection dicts, with each mapping: 'category_id' to COCO category id (starting with 1) & 'bbox' to a list of dimension-normalized [top, left, bottom, right] bounding-box values.
956954_dump_datatensorflow/tensorflow/lite/tools/evaluation/tasks/coco_object_detection/preprocess_coco_minival.py145functionDumps images & data from ground-truth objects into output_folder_path. The following are created in output_folder_path: images/: sub-folder for allowlisted validation images. ground_truth.pb: A binary proto file containing all ground-truth object-sets. Args: ground_truth_detections: A dict mapping image id to ground truth data. Output of _get_ground_truth_detections. images_folder_path: Validation images folder output_folder_path: folder to output files to.
957955_parse_argstensorflow/tensorflow/lite/tools/evaluation/tasks/coco_object_detection/preprocess_coco_minival.py190functionCreates a parser that parse the command line arguments. Returns: A namespace parsed from command line arguments.
958956_synset_to_wordtensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/generate_validation_labels.py30functionReturns synset to word dictionary by reading sysnset arrays.
959957_validation_file_pathtensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/generate_validation_labels.py50function
960958_synset_array_pathtensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/generate_validation_labels.py54function
961959_generate_validation_labelstensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/generate_validation_labels.py58function
962960_check_argumentstensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/generate_validation_labels.py67function
963961maintensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/generate_validation_labels.py80function
964962maintensorflow/tensorflow/lite/tools/optimize/python/modify_model_interface.py38functionApplication run loop.
965963_parse_type_to_inttensorflow/tensorflow/lite/tools/optimize/python/modify_model_interface_lib.py27functionConverts a tflite type to it's integer representation. Args: dtype: tf.DType representing the inference type. flag: str representing the flag name. Returns: integer, a tflite TensorType enum value. Raises: ValueError: Unsupported tflite type.
966964modify_model_interfacetensorflow/tensorflow/lite/tools/optimize/python/modify_model_interface_lib.py52functionModify a quantized model's interface (input/output) from float to integer. Args: input_file: Full path name to the input tflite file. output_file: Full path name to the output tflite file. input_type: Final input interface type. output_type: Final output interface type. Raises: RuntimeError: If the modification of the model interface was unsuccessful. ValueError: If the input_type or output_type is unsupported.
967965build_tflite_model_with_full_integer_quantizationtensorflow/tensorflow/lite/tools/optimize/python/modify_model_interface_lib_test.py31function
968966ModifyModelInterfaceTesttensorflow/tensorflow/lite/tools/optimize/python/modify_model_interface_lib_test.py56class
969967FormatConverterTesttensorflow/tensorflow/lite/tools/optimize/sparsity/python/format_converter_extension_test.py28class
970968get_build_cpustensorflow/tensorflow/lite/tools/pip_package/setup.py72function
971969make_argstensorflow/tensorflow/lite/tools/pip_package/setup.py80functionConstruct make command line.
972970make_outputtensorflow/tensorflow/lite/tools/pip_package/setup.py94functionInvoke make on the target and return output.
973971maketensorflow/tensorflow/lite/tools/pip_package/setup.py99functionInvoke make to build tflite C++ sources. Build dependencies: apt-get install swig libjpeg-dev zlib1g-dev python3-dev python3-nump
974972download_dependenciestensorflow/tensorflow/lite/tools/pip_package/setup.py108functionDownload build dependencies if haven't done yet.
975973CustomBuildExttensorflow/tensorflow/lite/tools/pip_package/setup.py114classCustomized build extension.
976974CustomBuildPytensorflow/tensorflow/lite/tools/pip_package/setup.py130class
977975get_pybind_includetensorflow/tensorflow/lite/tools/pip_package/setup.py137functionpybind11 include directory is not correctly resolved. This fixes include directory to /usr/local/pythonX.X Returns: include directories to find pybind11
978976set_signature_defstensorflow/tensorflow/lite/tools/signature/signature_def_utils.py25functionSets SignatureDefs to the Metadata of a TfLite flatbuffer buffer. Args: tflite_model: Binary TFLite model (bytes or bytes-like object) to which to add signature_def. signature_def_map: dict containing SignatureDefs to store in metadata. Returns: buffer: A TFLite model binary identical to model buffer with metadata field containing SignatureDef. Raises: ValueError: tflite_model buffer does not contain a valid TFLite model. signature_def_map is empty or does not contain a SignatureDef.
979977get_signature_defstensorflow/tensorflow/lite/tools/signature/signature_def_utils.py51functionGet SignatureDef dict from the Metadata of a TfLite flatbuffer buffer. Args: tflite_model: TFLite model buffer to get the signature_def. Returns: dict containing serving names to SignatureDefs if exists, otherwise, empty dict. Raises: ValueError: tflite_model buffer does not contain a valid TFLite model. DecodeError: SignatureDef cannot be parsed from TfLite SignatureDef metadata.
980978clear_signature_defstensorflow/tensorflow/lite/tools/signature/signature_def_utils.py78functionClears SignatureDefs from the Metadata of a TfLite flatbuffer buffer. Args: tflite_model: TFLite model buffer to remove signature_defs. Returns: buffer: A TFLite model binary identical to model buffer with no SignatureDef metadata. Raises: ValueError: tflite_model buffer does not contain a valid TFLite model.
981979SignatureDefUtilsTesttensorflow/tensorflow/lite/tools/signature/signature_def_utils_test.py30class
982980read32tensorflow/tensorflow/lite/tutorials/dataset.py35functionRead 4 bytes from bytestream as an unsigned 32-bit integer.
983981check_image_file_headertensorflow/tensorflow/lite/tutorials/dataset.py41functionValidate that filename corresponds to images for the MNIST dataset.
984982check_labels_file_headertensorflow/tensorflow/lite/tutorials/dataset.py57functionValidate that filename corresponds to labels for the MNIST dataset.
985983downloadtensorflow/tensorflow/lite/tutorials/dataset.py67functionDownload (and unzip) a file from the MNIST dataset if not already done.
986984datasettensorflow/tensorflow/lite/tutorials/dataset.py86functionDownload and parse MNIST dataset.
987985traintensorflow/tensorflow/lite/tutorials/dataset.py114functiontf.data.Dataset object for MNIST training data.
988986testtensorflow/tensorflow/lite/tutorials/dataset.py120functiontf.data.Dataset object for MNIST test data.
989987test_image_generatortensorflow/tensorflow/lite/tutorials/mnist_tflite.py35function
990988run_evaltensorflow/tensorflow/lite/tutorials/mnist_tflite.py47functionPerforms evaluation for input image over specified model. Args: interpreter: TFLite interpreter initialized with model to execute. input_image: Image input to the model. Returns: output: output tensor of model being executed.
991989maintensorflow/tensorflow/lite/tutorials/mnist_tflite.py72function
992990set_dlopen_flagstensorflow/tensorflow/python/pywrap_dlopen_global_flags.py43function
993991reset_dlopen_flagstensorflow/tensorflow/python/pywrap_dlopen_global_flags.py48function
994992import_graphdeftensorflow/tensorflow/python/pywrap_mlir.py26function
995993experimental_convert_saved_model_to_mlirtensorflow/tensorflow/python/pywrap_mlir.py32function
996994experimental_convert_saved_model_v1_to_mlirtensorflow/tensorflow/python/pywrap_mlir.py39function
997995experimental_run_pass_pipelinetensorflow/tensorflow/python/pywrap_mlir.py48function
998996enabletensorflow/tensorflow/python/tf2.py30function
999997disabletensorflow/tensorflow/python/tf2.py36function
1000998enabledtensorflow/tensorflow/python/tf2.py42function
1001999AssertTransformertensorflow/tensorflow/python/autograph/converters/asserts.py27classTransforms Assert nodes to Call so they can be handled as functions.
10021000transformtensorflow/tensorflow/python/autograph/converters/asserts.py50function
10031001AssertsTesttensorflow/tensorflow/python/autograph/converters/asserts_test.py30class
10041002_Breaktensorflow/tensorflow/python/autograph/converters/break_statements.py29class
10051003BreakTransformertensorflow/tensorflow/python/autograph/converters/break_statements.py39classCanonicalizes break statements into additional conditionals.
10061004transformtensorflow/tensorflow/python/autograph/converters/break_statements.py183function
10071005BreakCanonicalizationTesttensorflow/tensorflow/python/autograph/converters/break_statements_test.py27class
10081006_Functiontensorflow/tensorflow/python/autograph/converters/call_trees.py40class
10091007_ArgTemplateBuildertensorflow/tensorflow/python/autograph/converters/call_trees.py51classConstructs a tuple representing the positional arguments in a call. Example (yes, it's legal Python 3): f(*args1, b, *args2, c, d) -> args1 + (b,) + args2 + (c, d)
10101008CallTreeTransformertensorflow/tensorflow/python/autograph/converters/call_trees.py96classTransforms the call tree by renaming transformed symbols.
10111009transformtensorflow/tensorflow/python/autograph/converters/call_trees.py211functionTransform function call to the compiled counterparts. Args: node: AST ctx: EntityContext Returns: A tuple (node, new_names): node: The transformed AST new_names: set(string), containing any newly-generated names
10121010MockConvertedCalltensorflow/tensorflow/python/autograph/converters/call_trees_test.py30class
10131011CallTreesTesttensorflow/tensorflow/python/autograph/converters/call_trees_test.py42class
10141012ConditionalExpressionTransformertensorflow/tensorflow/python/autograph/converters/conditional_expressions.py28classConverts conditional expressions to functional form.
10151013transformtensorflow/tensorflow/python/autograph/converters/conditional_expressions.py48function
10161014ConditionalExpressionsTesttensorflow/tensorflow/python/autograph/converters/conditional_expressions_test.py26class
10171015_Continuetensorflow/tensorflow/python/autograph/converters/continue_statements.py29class
10181016_Blocktensorflow/tensorflow/python/autograph/converters/continue_statements.py40classTracks information about lexical blocks as they are visited in the AST. Mainly, this object tracks the creation of block guards that replace `continue` statements (e.g. `if not continue_:`). Attributes: create_guard_current: bool, whether to create a guard for the current statement. create_guard_next: bool, whether to create a guard for the next statement. is_loop_type: bool, whether this block is the body of a loop.
10191017ContinueCanonicalizationTransformertensorflow/tensorflow/python/autograph/converters/continue_statements.py60classCanonicalizes continue statements into additional conditionals.
10201018transformtensorflow/tensorflow/python/autograph/converters/continue_statements.py163function
10211019ContinueCanonicalizationTesttensorflow/tensorflow/python/autograph/converters/continue_statements_test.py27class
10221020_Functiontensorflow/tensorflow/python/autograph/converters/control_flow.py41class
10231021ControlFlowTransformertensorflow/tensorflow/python/autograph/converters/control_flow.py46classTransforms control flow structures like loops an conditionals.
10241022AnnotatedDeftensorflow/tensorflow/python/autograph/converters/control_flow.py395class
10251023transformtensorflow/tensorflow/python/autograph/converters/control_flow.py402function
10261024ControlFlowTransformertensorflow/tensorflow/python/autograph/converters/control_flow_deprecated_py2.py44classTransforms control flow structures like loops an conditionals.
10271025AnnotatedDeftensorflow/tensorflow/python/autograph/converters/control_flow_deprecated_py2.py623class
10281026transformtensorflow/tensorflow/python/autograph/converters/control_flow_deprecated_py2.py630function
10291027ControlFlowTestBasetensorflow/tensorflow/python/autograph/converters/control_flow_test.py43class
10301028NestedControlFlowTesttensorflow/tensorflow/python/autograph/converters/control_flow_test.py59class
10311029WhileStatementTesttensorflow/tensorflow/python/autograph/converters/control_flow_test.py106class
10321030IfStatementTesttensorflow/tensorflow/python/autograph/converters/control_flow_test.py352class
10331031ForStatementTesttensorflow/tensorflow/python/autograph/converters/control_flow_test.py598class
10341032AdvancedControlFlowTesttensorflow/tensorflow/python/autograph/converters/control_flow_test.py688class
10351033_LoopScopetensorflow/tensorflow/python/autograph/converters/directives.py48class
10361034_map_argstensorflow/tensorflow/python/autograph/converters/directives.py55functionMaps AST call nodes to the actual function's arguments. Args: call_node: ast.Call function: Callable[..., Any], the actual function matching call_node Returns: Dict[Text, ast.AST], mapping each of the function's argument names to the respective AST node. Raises: ValueError: if the default arguments are not correctly set
10371035DirectivesTransformertensorflow/tensorflow/python/autograph/converters/directives.py90classParses compiler directives and converts them into AST annotations.
10381036transformtensorflow/tensorflow/python/autograph/converters/directives.py180function
10391037DirectivesTesttensorflow/tensorflow/python/autograph/converters/directives_test.py28class
10401038_Functiontensorflow/tensorflow/python/autograph/converters/functions.py32class
10411039FunctionTransformertensorflow/tensorflow/python/autograph/converters/functions.py38classWraps function bodies around autograph-specific boilerplate.
10421040transformtensorflow/tensorflow/python/autograph/converters/functions.py134function
10431041FunctionTransformertensorflow/tensorflow/python/autograph/converters/functions_test.py31class
10441042ListCompTransformertensorflow/tensorflow/python/autograph/converters/list_comprehensions.py42classLowers list comprehensions into standard control flow.
10451043transformtensorflow/tensorflow/python/autograph/converters/list_comprehensions.py81function
10461044ListCompTesttensorflow/tensorflow/python/autograph/converters/list_comprehensions_test.py26class
10471045_Statementtensorflow/tensorflow/python/autograph/converters/lists.py45class
10481046ListTransformertensorflow/tensorflow/python/autograph/converters/lists.py51classConverts lists and related operations to their TF counterpart.
10491047transformtensorflow/tensorflow/python/autograph/converters/lists.py239function
10501048ListTesttensorflow/tensorflow/python/autograph/converters/lists_test.py33class
10511049LogicalExpressionTransformertensorflow/tensorflow/python/autograph/converters/logical_expressions.py49classConverts logical expressions to corresponding TF calls.
10521050transformtensorflow/tensorflow/python/autograph/converters/logical_expressions.py135function
10531051LogicalExpressionTesttensorflow/tensorflow/python/autograph/converters/logical_expressions_test.py28class
10541052_RewriteBlocktensorflow/tensorflow/python/autograph/converters/return_statements.py37class
10551053ConditionalReturnRewritertensorflow/tensorflow/python/autograph/converters/return_statements.py43classRewrites a a pattern where it's unobvious that all paths return a value. This rewrite allows avoiding intermediate None return values. The following pattern: if cond: <block 1> return else: <block 2> <block 3> is converted to: if cond: <block 1> return else: <block 2> <block 3> and vice-versa (if the else returns, subsequent statements are moved under the if branch).
10561054_Blocktensorflow/tensorflow/python/autograph/converters/return_statements.py159class
10571055_Functiontensorflow/tensorflow/python/autograph/converters/return_statements.py172class
10581056ReturnStatementsTransformertensorflow/tensorflow/python/autograph/converters/return_statements.py183classLowers return statements into variables and conditionals. Specifically, the following pattern: <block 1> return val <block 2> is converted to: do_return = False retval = None <block 1> do_return = True retval = val if not do_return: <block 2> return retval The conversion adjusts loops as well: <block 1> while cond: <block 2> return retval is converted to: <block 1> while not do_return and cond: <block 2> do_return = True retval = val
10591057transformtensorflow/tensorflow/python/autograph/converters/return_statements.py392functionEnsure a function has only a single return, at the end.
10601058SingleReturnTesttensorflow/tensorflow/python/autograph/converters/return_statements_test.py28class
10611059SliceTransformertensorflow/tensorflow/python/autograph/converters/slices.py28classConverts slicing operations to their TF counterpart. Currently, relying on the default slice operator that Tensor uses is insufficient, because TensorArray and tensor lists use dedicated index read and write functions.
10621060transformtensorflow/tensorflow/python/autograph/converters/slices.py84function
10631061SliceTesttensorflow/tensorflow/python/autograph/converters/slices_test.py31class
10641062VariableAccessTransformertensorflow/tensorflow/python/autograph/converters/variables.py28classRewrites basic symbol reads. This transformer rewrites variable reads with a "read" operator which allows tracking activity. Example: For a basic statement: a = b + c This is translated to: a = ld(b) + ld(c) Augmented assignment operations also introduce a `ld` operator: a += b The assignment target also receives an operator to properly represent the read: a = ld(a) a += ld(b)
10651063transformtensorflow/tensorflow/python/autograph/converters/variables.py100function
10661064VariablesTesttensorflow/tensorflow/python/autograph/converters/variables_test.py26class
10671065_control_ctxtensorflow/tensorflow/python/autograph/core/ag_ctx.py29function
10681066control_status_ctxtensorflow/tensorflow/python/autograph/core/ag_ctx.py35function
10691067Statustensorflow/tensorflow/python/autograph/core/ag_ctx.py40class
10701068ControlStatusCtxtensorflow/tensorflow/python/autograph/core/ag_ctx.py46classA context that tracks whether autograph is enabled by the user.
10711069NullCtxtensorflow/tensorflow/python/autograph/core/ag_ctx.py66classHelper substitute for contextlib.nullcontext.
10721070_default_control_status_ctxtensorflow/tensorflow/python/autograph/core/ag_ctx.py76function
10731071Ruletensorflow/tensorflow/python/autograph/core/config_lib.py27classBase class for conversion rules.
10741072Actiontensorflow/tensorflow/python/autograph/core/config_lib.py38class
10751073DoNotConverttensorflow/tensorflow/python/autograph/core/config_lib.py44classIndicates that this module should be not converted.
10761074Converttensorflow/tensorflow/python/autograph/core/config_lib.py56classIndicates that this module should be converted.
10771075Featuretensorflow/tensorflow/python/autograph/core/converter.py83classThis enumeration represents optional conversion options. These conversion options are experimental. They are subject to change without notice and offer no guarantees. _Example Usage_ ```python optionals= tf.autograph.experimental.Feature.EQUALITY_OPERATORS @tf.function(experimental_autograph_options=optionals) def f(i): if i == 0: # EQUALITY_OPERATORS allows the use of == here. tf.print('i is zero') ``` Attributes: ALL: Enable all features. AUTO_CONTROL_DEPS: Insert of control dependencies in the generated code. ASSERT_STATEMENTS: Convert Tensor-dependent assert statements to tf.Assert. BUILTIN_FUNCTIONS: Convert builtin functions applied to Tensors to their TF counterparts. EQUALITY_OPERATORS: Whether to convert the comparison operators, like equality. This is soon to be deprecated as support is being added to the Tensor class. LISTS: Convert list idioms, like initializers, slices, append, etc. NAME_SCOPES: Insert name scopes that name ops according to context, like the function they were defined in.
10781076ConversionOptionstensorflow/tensorflow/python/autograph/core/converter.py138classImmutable container for global conversion flags. Attributes: recursive: bool, whether to recursively convert any user functions or classes that the converted function may use. user_requested: bool, whether the conversion was explicitly requested by the user, as opposed to being performed as a result of other logic. This value always auto-resets resets to False in child conversions. optional_features: Union[Feature, Set[Feature]], controls the use of optional features in the conversion process. See Feature for available options.
10791077ProgramContexttensorflow/tensorflow/python/autograph/core/converter.py236classProgramContext keeps track of converting function hierarchies. Attributes: options: ConversionOptions autograph_module: Deprecated. Do not use.
10801078Basetensorflow/tensorflow/python/autograph/core/converter.py249classAll converters should inherit from this class. Attributes: ctx: EntityContext
10811079TestConvertertensorflow/tensorflow/python/autograph/core/converter_test.py32class
10821080ConversionOptionsTesttensorflow/tensorflow/python/autograph/core/converter_test.py36class
10831081ConverterBaseTesttensorflow/tensorflow/python/autograph/core/converter_test.py64class
10841082allowlisttensorflow/tensorflow/python/autograph/core/converter_testing.py35functionHelper that marks a callable as whtelitisted.
10851083is_inside_generated_codetensorflow/tensorflow/python/autograph/core/converter_testing.py47functionTests whether the caller is generated code. Implementation-specific.
10861084TestingTranspilertensorflow/tensorflow/python/autograph/core/converter_testing.py66classTesting version that only applies given transformations.
10871085TestCasetensorflow/tensorflow/python/autograph/core/converter_testing.py98classBase class for unit tests in this module. Contains relevant utilities.
10881086FunctionScopetensorflow/tensorflow/python/autograph/core/function_wrappers.py33classContext manager that wraps the body of a converted function. This context manager handles various operations related to the scope of a function: * optional TF name scopes - these name scopes match the name of the function, for easy visualization in tensorBoard; * optional automatic control dependencies - this adds the same mechanism for control dependencies that is used by `@tf.function`; it can be optionally enabled when using `tf.autograph.to_graph`; * tracking of autograph conversion state (whether it's enabled by the user, conversion options;
10891087with_function_scopetensorflow/tensorflow/python/autograph/core/function_wrappers.py114functionInline version of the FunctionScope context manager.
10901088FunctionWrappersTesttensorflow/tensorflow/python/autograph/core/function_wrappers_test.py29class
10911089UnsupportedFeaturesCheckertensorflow/tensorflow/python/autograph/core/unsupported_features_checker.py26classQuick check for Python features we know we don't support. Any features detected will cause AutoGraph to not compile a function.
10921090verifytensorflow/tensorflow/python/autograph/core/unsupported_features_checker.py60function
10931091is_autograph_strict_conversion_modetensorflow/tensorflow/python/autograph/impl/api.py72function
10941092AutoGraphErrortensorflow/tensorflow/python/autograph/impl/api.py82classBase class for all AutoGraph exceptions.
10951093ConversionErrortensorflow/tensorflow/python/autograph/impl/api.py87classRaised during the conversion process.
10961094StagingErrortensorflow/tensorflow/python/autograph/impl/api.py92classRaised during the staging (i.e. Python execution) of converted code.
10971095_ErrorMetadatatensorflow/tensorflow/python/autograph/impl/api.py97classAutoGraph-specific error metadata. See base class.
10981096_attach_error_metadatatensorflow/tensorflow/python/autograph/impl/api.py146functionAugments an error with the metadata necessary for rewrite.
10991097StackTraceMappertensorflow/tensorflow/python/autograph/impl/api.py166classRemaps generated code to code it originated from.
11001098PyToTFtensorflow/tensorflow/python/autograph/impl/api.py203classThe TensorFlow AutoGraph transformer.
11011099_convert_actualtensorflow/tensorflow/python/autograph/impl/api.py275functionApplies AutoGraph to entity.
11021100autograph_artifacttensorflow/tensorflow/python/autograph/impl/api.py298function
11031101is_autograph_artifacttensorflow/tensorflow/python/autograph/impl/api.py303function
11041102converted_calltensorflow/tensorflow/python/autograph/impl/api.py307functionConverts a function call inline. For internal use only. Note: The argument list is optimized for readability of generated code, which may look like this: ag__.converted_call(f, (arg1, arg2), None, fscope) ag__.converted_call(f, (), dict(arg1=val1, **kwargs), fscope) ag__.converted_call(f, (arg1, arg2) + varargs, dict(**kwargs), lscope) Args: f: The function to convert. args: Tuple, the original positional arguments of f kwargs: Optional[Dict], the original keyword arguments of f caller_fn_scope: Optional[function_wrappers.FunctionScope], the function scope of the converted function in which this call was originally made. options: Optional[converter.ConversionOptions], conversion options. If not specified, the value of caller_fn_scope.callopts is used. Either options or caller_fn_scope must be present. Returns: Any, the result of executing a possibly-converted `f` with the given arguments.
11051103_call_unconvertedtensorflow/tensorflow/python/autograph/impl/api.py466functionCalls the original function without converting with AutoGraph.
11061104_fall_back_unconvertedtensorflow/tensorflow/python/autograph/impl/api.py479functionFalls back to calling the function unconverted, in case of error.
11071105tf_converttensorflow/tensorflow/python/autograph/impl/api.py506functionDecorator that applies AutoGraph to a function. Use in internal APIs. This API is suitable for high order functions internal to the TensorFlow API, and more generally any function to which Autograph is not applied. Guidance: convert was a decorator meant for use directly by developers, and will be soon deprecated in favor of tf.function. tf_convert is to be called from high order functions internal to TF. Args: f: Callable. ctx: ag_ctx.ControlStatusCtx, the Autograph context in which `f` is used. convert_by_default: bool, whether to use AutoGraph when the context doesn't specify. user_requested: bool, whether to ignore the conversion allowlist. See ConversionOptions.user_requested. Returns: Either `f or the converted version of `f`.
11081106call_with_unspecified_conversion_statustensorflow/tensorflow/python/autograph/impl/api.py565functionDecorator that resets the conversion context to the unspecified status.
11091107_log_callargstensorflow/tensorflow/python/autograph/impl/api.py577functionLogging helper.
11101108do_not_converttensorflow/tensorflow/python/autograph/impl/api.py599functionDecorator that suppresses the conversion of a function. Args: func: function to decorate. Returns: If `func` is not None, returns a `Callable` which is equivalent to `func`, but is not converted by AutoGraph. If `func` is None, returns a decorator that, when invoked with a single `func` argument, returns a `Callable` equivalent to the above case.
11111109converttensorflow/tensorflow/python/autograph/impl/api.py626functionDecorator that compiles a function to use TensorFlow ops. The decorator is dynamic - it recompiles the target whenever the decorated function is called. This means the parameter values are known at conversion. It also means that repeated calls with different types of parameters will be correctly processed. Args: recursive: bool, whether to recursively convert any functions or classes that the converted function may use. optional_features: converted.Feature, allows toggling optional or experimental features. When set to None, only the core features are enabled. user_requested: bool, whether this is a function that the user explicitly asked to be converted. See ConversionOptions.user_requested. conversion_ctx: Optional ag_ctx.ControlStatusCtx, the Autograph context in which `f` is used. Returns: Callable, a decorator that converts the given function into an equivalent function that uses TensorFlow ops.
11121110to_graphtensorflow/tensorflow/python/autograph/impl/api.py682functionConverts a Python entity into a TensorFlow graph. Also see: `tf.autograph.to_code`, `tf.function`. Unlike `tf.function`, `to_graph` is a low-level transpiler that converts Python code to TensorFlow graph code. It does not implement any caching, variable management or create any actual ops, and is best used where greater control over the generated TensorFlow graph is desired. Another difference from `tf.function` is that `to_graph` will not wrap the graph into a TensorFlow function or a Python callable. Internally, `tf.function` uses `to_graph`. Example usage: >>> def f(x): ... if x > 0: ... y = x * x ... else: ... y = -x ... return y ... >>> converted_f = to_graph(f) >>> x = tf.constant(2) >>> converted_f(x) # converted_foo is like a TensorFlow Op. <tf.Tensor: shape=(), dtype=int32, numpy=4> Supported Python entities include: * functions * classes * object methods Functions are converted into new functions with converted code. Classes are converted by generating a new class whose methods use converted code. Methods are converted into unbound function that have an additional first argument called `self`. For a tutorial, see the [tf.function and AutoGraph guide](https://www.tensorflow.org/guide/function). For more detailed information, see the [AutoGraph reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/index.md). Args: entity: Python callable or class to convert. recursive: Whether to recursively convert any functions that the converted function may call. experimental_optional_features: `None`, a tuple of, or a single `tf.autograph.experimental.Feature` value. Returns: Same as `entity`, the converted Python function or class. Raises: ValueError: If the entity could not be converted.
11131111to_graph_v1tensorflow/tensorflow/python/autograph/impl/api.py754functionConverts a Python entity into a TensorFlow graph. Also see: `tf.autograph.to_code`, `tf.function`. Unlike `tf.function`, `to_graph` is a low-level transpiler that converts Python code to TensorFlow graph code. It does not implement any caching, variable management or create any actual ops, and is best used where greater control over the generated TensorFlow graph is desired. Another difference from `tf.function` is that `to_graph` will not wrap the graph into a TensorFlow function or a Python callable. Internally, `tf.function` uses `to_graph`. _Example Usage_ ```python def foo(x): if x > 0: y = x * x else: y = -x return y converted_foo = to_graph(foo) x = tf.constant(1) y = converted_foo(x) # converted_foo is a TensorFlow Op-like. assert is_tensor(y) ``` Supported Python entities include: * functions * classes * object methods Functions are converted into new functions with converted code. Classes are converted by generating a new class whose methods use converted code. Methods are converted into unbound function that have an additional first argument called `self`. Args: entity: Python callable or class to convert. recursive: Whether to recursively convert any functions that the converted function may call. arg_values: Deprecated. arg_types: Deprecated. experimental_optional_features: `None`, a tuple of, or a single `tf.autograph.experimental.Feature` value. Returns: Same as `entity`, the converted Python function or class. Raises: ValueError: If the entity could not be converted.
11141112to_code_v1tensorflow/tensorflow/python/autograph/impl/api.py825functionReturns the source code generated by AutoGraph, as a string. Example usage: >>> def f(x): ... if x < 0: ... x = -x ... return x >>> tf.autograph.to_code(f) "...def tf__f(x):..." Also see: `tf.autograph.to_graph`. Note: If a function has been decorated with `tf.function`, pass its underlying Python function, rather than the callable that `tf.function creates: >>> @tf.function ... def f(x): ... if x < 0: ... x = -x ... return x >>> tf.autograph.to_code(f.python_function) "...def tf__f(x):..." Args: entity: Python callable or class. recursive: Whether to recursively convert any functions that the converted function may call. arg_values: Deprecated. arg_types: Deprecated. indentation: Deprecated. experimental_optional_features: `None`, a tuple of, or a single `tf.autograph.experimental.Feature` value. Returns: The converted code as string.
11151113to_codetensorflow/tensorflow/python/autograph/impl/api.py879functionReturns the source code generated by AutoGraph, as a string. Example usage: >>> def f(x): ... if x < 0: ... x = -x ... return x >>> tf.autograph.to_code(f) "...def tf__f(x):..." Also see: `tf.autograph.to_graph`. Note: If a function has been decorated with `tf.function`, pass its underlying Python function, rather than the callable that `tf.function creates: >>> @tf.function ... def f(x): ... if x < 0: ... x = -x ... return x >>> tf.autograph.to_code(f.python_function) "...def tf__f(x):..." Args: entity: Python callable or class to convert. recursive: Whether to recursively convert any functions that the converted function may call. experimental_optional_features: `None`, a tuple of, or a single `tf.autograph.experimental.Feature` value. Returns: The converted code as string.
11161114TestResourcetensorflow/tensorflow/python/autograph/impl/api_test.py64class
11171115ApiTesttensorflow/tensorflow/python/autograph/impl/api_test.py70class
11181116_is_of_known_loaded_moduletensorflow/tensorflow/python/autograph/impl/conversion.py37function
11191117_is_known_loaded_typetensorflow/tensorflow/python/autograph/impl/conversion.py46functionTests whether the function or method is an instance of a known type.
11201118is_unsupportedtensorflow/tensorflow/python/autograph/impl/conversion.py73functionChecks whether an entity is supported by AutoGraph at all.
11211119is_allowlistedtensorflow/tensorflow/python/autograph/impl/conversion.py116functionChecks whether an entity is allowed for use in graph mode. Examples of allowed entities include all members of the tensorflow package. Args: o: A Python entity. check_call_override: Reserved for internal use. When set to `False`, it disables the rule according to which classes are allowed if their __call__ method is allowed. allow_namedtuple_subclass: Reserved for internal use. When `True`, namedtuple subclasses are not allowed. Returns: Boolean
11221120is_in_allowlist_cachetensorflow/tensorflow/python/autograph/impl/conversion.py221function
11231121cache_allowlistedtensorflow/tensorflow/python/autograph/impl/conversion.py229function
11241122ConversionTesttensorflow/tensorflow/python/autograph/impl/conversion_test.py39class
11251123set_element_typetensorflow/tensorflow/python/autograph/lang/directives.py33functionIndicates that the entity is expected hold items of specified type/shape. The staged TensorFlow ops will reflect and assert this data type. Ignored otherwise. Args: entity: The entity to annotate. dtype: TensorFlow dtype value to assert for entity. shape: Optional shape to assert for entity.
11261124set_loop_optionstensorflow/tensorflow/python/autograph/lang/directives.py50functionSpecifies additional arguments to be passed to the enclosing while_loop. The parameters apply to and only to the immediately enclosing loop. It only has effect if the loop is staged as a TF while_loop; otherwise the parameters have no effect. Usage: >>> @tf.function(autograph=True) ... def f(): ... n = 0 ... for i in tf.range(10): ... tf.autograph.experimental.set_loop_options(maximum_iterations=3) ... n += 1 ... return n >>> @tf.function(autograph=True) ... def f(): ... v = tf.constant((0,)) ... for i in tf.range(3): ... tf.autograph.experimental.set_loop_options( ... shape_invariants=[(v, tf.TensorShape([None]))] ... ) ... v = tf.concat((v, [i]), 0) ... return v Also see tf.while_loop. Args: parallel_iterations: The maximum number of iterations allowed to run in parallel at any given time. Note that this does not guarantee parallel execution. swap_memory: Whether to store intermediate values needed for gradients on the CPU instead of GPU. maximum_iterations: Allows limiting the total number of iterations executed by the loop. shape_invariants: Allows controlling the argument with the same name passed to tf.while_loop. Unlike tf.while_loop, this is a list of `(tensor, shape)` pairs.
11271125_validate_list_constructortensorflow/tensorflow/python/autograph/lang/special_functions.py31functionValidates the inputs of tensor_list.
11281126match_staging_leveltensorflow/tensorflow/python/autograph/lang/special_functions.py50functionCasts a value to be staged at the same level as another.
11291127tensor_listtensorflow/tensorflow/python/autograph/lang/special_functions.py57functionCreates an tensor list and populates it with the given elements. This function provides a more uniform access to tensor lists and tensor arrays, and allows optional initialization. Note: this function is a simplified wrapper. If you need greater control, it is recommended to use the underlying implementation directly. Args: elements: Iterable[tf.Tensor, ...], the elements to initially fill the list with element_dtype: Optional[tf.DType], data type for the elements in the list; required if the list is empty element_shape: Optional[tf.TensorShape], shape for the elements in the list; required if the list is empty use_tensor_array: bool, whether to use the more compatible but restrictive tf.TensorArray implementation Returns: Union[tf.Tensor, tf.TensorArray], the new list. Raises: ValueError: for invalid arguments
11301128stacktensorflow/tensorflow/python/autograph/lang/special_functions.py92functionStacks the input, if it admits the notion of stacking. For example, a list of tensors can be stacked into a larger tensor. This function is similar to tf.stack, but it accepts non-lists and lists of non-tensors as arguments. In the latter case, the function does nothing. Args: list_or_tensor: Any element_dtype: tf.DType, optional dtypedtype for the elements in the list. Required if the input is stackable, and the list is untyped. strict: bool, if True an error is raised if the input is not stackable. Otherwise the function is a no-op. Returns: Any, if the input is stackable, the result will be a tf.Tensor. Otherwise, if strict=False, the result will be list_or_tensor. Raises: ValueError: if strict=True and the input is not stackable.
11311129SpecialFunctionsTesttensorflow/tensorflow/python/autograph/lang/special_functions_test.py31class
11321130if_exptensorflow/tensorflow/python/autograph/operators/conditional_expressions.py27function
11331131_tf_if_exptensorflow/tensorflow/python/autograph/operators/conditional_expressions.py34functionOverload of if_exp that stages a TF cond.
11341132_py_if_exptensorflow/tensorflow/python/autograph/operators/conditional_expressions.py55function
11351133_basic_exprtensorflow/tensorflow/python/autograph/operators/conditional_expressions_test.py29function
11361134IfExpTesttensorflow/tensorflow/python/autograph/operators/conditional_expressions_test.py38class
11371135_verify_loop_init_varstensorflow/tensorflow/python/autograph/operators/control_flow.py102functionEnsures that all values in the state are defined when entering a loop.
11381136_is_subshapetensorflow/tensorflow/python/autograph/operators/control_flow.py117functionReturns True if left shape is at least as specific as right shape.
11391137_verify_single_loop_vartensorflow/tensorflow/python/autograph/operators/control_flow.py134functionVerifies whether the initial, entry and exit values are consistent.
11401138_verify_tf_loop_varstensorflow/tensorflow/python/autograph/operators/control_flow.py191functionVerifies loop variables for consistency.
11411139verify_single_cond_vartensorflow/tensorflow/python/autograph/operators/control_flow.py233functionVerifies whether body_var and orelse_var are consistent.
11421140_verify_tf_cond_branch_varstensorflow/tensorflow/python/autograph/operators/control_flow.py263functionVerifies variables output by a conditional branch for consistency.
11431141_verify_tf_cond_varstensorflow/tensorflow/python/autograph/operators/control_flow.py276functionVerifies variables manipulated by a conditional for consistency.
11441142for_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py291functionFunctional form of a for statement. The loop operates on a state, which includes all symbols that are variant across loop iterations, excluding the variables local to the loop. For example, given the loop below that calculates the geometric and arithmetic means or some numbers: ``` geo_mean = 1 arith_mean = 0 for i in range(n): a = numbers[i] geo_mean *= a arith_mean += a ``` The state is represented by the variables geo_mean and arith_mean. The `extra_test`, `body`, `get_state` and `set_state` functions must bind to the original `geo_mean` and `arith_mean` symbols, using `nonlocal`. The inputs and outputs of the callables representing the loop blocks are not explicit - instead, these functions must use nonlocal/global for side effects. The inputs and outputs are instead controlled by the set_state/get_state functions. Args: iter_: The entity being iterated over. extra_test: Callable with boolean return type. An additional loop condition. body: Callable representing the actual loop body. get_state: Additional callable which can capture additional state (such as the values of composite symbols). This is only useful when staging the loop. set_state: Additional callable which save values captured by get_state back into the Python environment. This is only useful when staging the loop. symbol_names: Tuple containing names of the loop variables returned by get_state. opts: Optional dict of extra loop parameters. Returns: Tuple containing the final state.
11451143_py_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py371functionOverload of for_stmt that executes a Python for loop.
11461144_known_len_tf_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py400functionOverload of for_stmt that iterates over TF entities that admit a length.
11471145_tf_ragged_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py447functionOverload of for_stmt that iterates over TF ragged tensors.
11481146_tf_range_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py494functionOverload of for_stmt that iterates over a TF range (and elides it).
11491147_tf_iterator_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py555functionOverload of for_stmt that iterates over TF Iterators. See for_loop.
11501148_general_purpose_scantensorflow/tensorflow/python/autograph/operators/control_flow.py615functionVariant of Dataset.scan with semantics of general-purpose computation.
11511149_tf_dataset_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py629functionOverload of _dataset_for_stmt with early stopping. See for_stmt.
11521150_tf_distributed_iterable_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py700functionOverload of for_stmt that iterates over TF distributed datasets.
11531151while_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py727functionFunctional form of a while statement. The loop operates on a so-called state, which includes all symbols that are variant across loop iterations. In what follows we refer to state as either a tuple of entities that represent an actual state, or a list of arguments of the corresponding types. The inputs and outputs of the callables representing the loop blocks are not explicit - instead, these functions must use nonlocal/global for side effects. The inputs and outputs are instead controlled by the set_state/get_state functions. Args: test: Callable with boolean return type. The loop condition. body: Callable representing the actual loop body. get_state: Additional callable which can capture additional state (such as the values of composite symbols). This is only useful when staging the loop. set_state: Additional callable which save values captured by get_state back into the Python environment. This is only useful when staging the loop. symbol_names: Tuple containing the names of all loop variables. opts: Optional dict of extra loop parameters. Returns: Tuple containing the final state.
11541152_PythonLoopCheckertensorflow/tensorflow/python/autograph/operators/control_flow.py777classVerifies Python loops for TF-specific limits.
11551153_py_while_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py851functionOverload of while_stmt that executes a Python while loop.
11561154_shape_invariants_mapping_to_positional_listtensorflow/tensorflow/python/autograph/operators/control_flow.py872function
11571155_tf_while_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py882functionOverload of while_stmt that stages a TF while_stmt.
11581156if_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py915functionFunctional form of an if statement. The conditional operates on a state, which includes all symbols whose values are a function of the branch taken. For example, given the code below that calculates the abs function: ``` x = 1 if x > 0: x = -x ``` The state is represented by the variable `x`. The `body, `orelse` and `set_state` functions must bind to the original `x` symbol, using `nonlocal`. The inputs and outputs of the callables representing the loop blocks are not explicit - instead, these functions must use nonlocal/global for side effects. The inputs and outputs are instead controlled by the set_state/get_state functions. Args: cond: Boolean. body: Callable representing the main block of the conditional. orelse: Callable representing the else block of the conditional. get_state: Function that returns a tuple containing the values of all composite symbols modified within the conditional. This allows access to state that branches may mutate through side effects. This function is not needed and should not be called when dispatching to code matching Python's default semantics. This is useful for checkpointing to avoid unintended side-effects when staging requires evaluating all code-paths. set_state: Function to set the values of all composite symbols modified within the conditional. This is the complement to get_state, used to restore checkpointed values. The single argument a tuple containing values for each composite symbol that may be modified in a branch of the conditional. The is usually the result of a call to get_state. symbol_names: Tuple containing basic loop var names. nouts: Number of variables output by the statement. Vars which are not outputs will not be passed through staged control flow such as tf.cond. This includes variables that are defined before the conditional, but are not used after it.
11591157_tf_if_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py965functionOverload of if_stmt that stages a TF cond.
11601158_py_if_stmttensorflow/tensorflow/python/autograph/operators/control_flow.py1011functionOverload of if_stmt that executes a Python if statement.
11611159_disallow_undefs_into_looptensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py104functionEnsures that all values in the state are defined when entering a loop.
11621160_is_subshapetensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py120functionReturns True if left shape is at least as specific as right shape.
11631161_verify_single_loop_vartensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py137functionVerifies whether the initial, entry and exit values are consistent.
11641162_verify_tf_loop_varstensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py191functionVerifies loop variables for consistency.
11651163_verify_single_cond_vartensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py223functionVerifies whether body_var and orelse_var are consistent.
11661164_verify_tf_cond_varstensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py248functionVerifies variables manipulated by a conditional for consistency.
11671165for_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py279functionFunctional form of a for statement. The loop operates on a state, which includes all symbols that are variant across loop iterations, excluding the iterate as well as the variables local to the loop. For example, given the loop below that calculates the geometric and arithmetic means or some numbers: geo_mean = 1 arith_mean = 0 for i in range(n): a = numbers[i] geo_mean *= a arith_mean += a The state is represented by the variables geo_mean and arith_mean. The argument for initial_state may contain the tuple (1, 0), the body will include the arguments geo_mean and arith_mean and will return a tuple representing the new values for geo_mean and respectively arith_mean. Args: iter_: The entity being iterated over. extra_test: Callable with the state as arguments, and boolean return type. An additional loop condition. body: Callable with the iterate and the state as arguments, and state as return type. The actual loop body. get_state: Additional callable which can capture additional state (such as the values of composite symbols). This is only useful when staging the loop. set_state: Additional callable which save values captured by get_state back into the Python environment. This is only useful when staging the loop. init_vars: Tuple containing the initial state. basic_symbol_names: Tuple containing basic loop var names. composite_symbol_names: Tuple containing composite loop var names. opts: Optional dict of extra loop parameters. Returns: Tuple containing the final state.
11681166_py_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py364functionOverload of for_stmt that executes a Python for loop.
11691167_known_len_tf_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py383functionOverload of for_stmt that iterates over TF entities that admit a length.
11701168_tf_ragged_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py446functionOverload of for_stmt that iterates over TF ragged tensors.
11711169_tf_range_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py509functionOverload of for_stmt that iterates over a TF range (and elides it).
11721170_tf_iterator_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py572functionOverload of for_stmt that iterates over TF Iterators. See for_loop.
11731171_tf_dataset_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py636functionOverload of for_stmt that iterates over TF Datasets.
11741172_general_purpose_scantensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py653functionVariant of Dataset.scan with semantics of general-purpose computation.
11751173_dataset_for_stmt_with_extra_testtensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py667functionOverload of _dataset_for_stmt with early stopping. See for_stmt.
11761174_dataset_for_stmt_no_extra_testtensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py725functionOverload of _dataset_for_stmt without early stopping. See for_stmt.
11771175_tf_distributed_dataset_for_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py794functionOverload of for..in statement that iterates over the input.
11781176while_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py817functionFunctional form of a while statement. The loop operates on a so-called state, which includes all symbols that are variant across loop iterations. In what follows we refer to state as either a tuple of entities that represent an actual state, or a list of arguments of the corresponding types. Args: test: Callable with the state as arguments, and boolean return type. The loop condition. body: Callable with the state as arguments, and state as return type. The actual loop body. get_state: Additional callable which can capture additional state (such as the values of composite symbols). This is only useful when staging the loop. set_state: Additional callable which save values captured by get_state back into the Python environment. This is only useful when staging the loop. init_vars: Tuple containing the initial state. basic_symbol_names: Tuple containing basic loop var names. composite_symbol_names: Tuple containing composite loop var names. opts: Optional dict of extra loop parameters. Returns: Tuple containing the final state.
11791177_shape_invariants_mapping_to_positional_listtensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py873function
11801178_tf_while_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py883functionOverload of while_stmt that stages a TF while_stmt.
11811179_PythonLoopCheckertensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py925classVerifies Python loops for TF-specific limits.
11821180_py_while_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py987functionOverload of while_stmt that executes a Python while loop.
11831181if_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py1008functionFunctional form of an if statement. Args: cond: Boolean. body: Callable with no arguments, and outputs of the positive (if) branch as return type. orelse: Callable with no arguments, and outputs of the negative (else) branch as return type. get_state: Function that returns a tuple containing the values of all composite symbols modified within the conditional. This allows access to state that branches may mutate through side effects. This function is not needed and should not be called when dispatching to code matching Python's default semantics. This is useful for checkpointing to avoid unintended side-effects when staging requires evaluating all code-paths. set_state: Function to set the values of all composite symbols modified within the conditional. This is the complement to get_state, used to restore checkpointed values. The single argument a tuple containing values for each composite symbol that may be modified in a branch of the conditional. The is usually the result of a call to get_state. basic_symbol_names: Tuple containing basic loop var names. composite_symbol_names: Tuple containing composite loop var names. Returns: Tuple containing the statement outputs.
11841182tf_if_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py1048functionOverload of if_stmt that stages a TF cond.
11851183_isolate_statetensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py1088functionWraps func to (best-effort) isolate state mutations that func may do. The simplest example of state mutation is mutation of variables (via e.g. attributes), or modification of globals. This allows us to more safely execute this function without worrying about side effects when the function wasn't normally expected to execute. For example, staging requires that the function is executed ahead of time, and we need to ensure its effects are not observed during normal execution. Args: func: () -> Any get_state: () -> Any, returns the current state set_state: (Any) -> None, resets the state to the specified values. Typically the result of an earlier call to `get_state`. Returns: Tuple[Any, Any], where the first element is the return value of `func`, and the second is the final state values.
11861184_wrap_disallow_undefs_from_condtensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py1121functionWraps conditional branch to disallow returning undefined symbols.
11871185_py_if_stmttensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py1152functionOverload of if_stmt that executes a Python if statement.
11881186ForLoopTesttensorflow/tensorflow/python/autograph/operators/control_flow_test.py49class
11891187WhileLoopTesttensorflow/tensorflow/python/autograph/operators/control_flow_test.py540class
11901188IfStmtTesttensorflow/tensorflow/python/autograph/operators/control_flow_test.py775class
11911189new_listtensorflow/tensorflow/python/autograph/operators/data_structures.py36functionThe list constructor. Args: iterable: Optional elements to fill the list with. Returns: A list-like object. The exact return value depends on the initial elements.
11921190tf_tensor_array_newtensorflow/tensorflow/python/autograph/operators/data_structures.py57functionOverload of new_list that stages a Tensor list creation.
11931191tf_tensor_list_newtensorflow/tensorflow/python/autograph/operators/data_structures.py107functionOverload of new_list that stages a Tensor list creation.
11941192_py_list_newtensorflow/tensorflow/python/autograph/operators/data_structures.py166functionOverload of new_list that creates a Python list.
11951193list_appendtensorflow/tensorflow/python/autograph/operators/data_structures.py171functionThe list append function. Note: it is unspecified where list_ will be mutated or not. If list_ is a TensorFlow entity, it will not be typically mutated. If list_ is a plain list, it will be. In general, if the list is mutated then the return value should point to the original entity. Args: list_: An entity that supports append semantics. x: The element to append. Returns: Same as list_, after the append was performed. Raises: ValueError: if list_ is not of a known list-like type.
11961194_tf_tensor_list_appendtensorflow/tensorflow/python/autograph/operators/data_structures.py202functionOverload of list_append that stages a Tensor list write.
11971195_tf_tensorarray_appendtensorflow/tensorflow/python/autograph/operators/data_structures.py218functionOverload of list_append that stages a TensorArray write.
11981196_py_list_appendtensorflow/tensorflow/python/autograph/operators/data_structures.py223functionOverload of list_append that executes a Python list append.
11991197ListPopOptstensorflow/tensorflow/python/autograph/operators/data_structures.py230class
12001198list_poptensorflow/tensorflow/python/autograph/operators/data_structures.py235functionThe list pop function. Note: it is unspecified where list_ will be mutated or not. If list_ is a TensorFlow entity, it will not be typically mutated. If list_ is a plain list, it will be. In general, if the list is mutated then the return value should point to the original entity. Args: list_: An entity that supports pop semantics. i: Optional index to pop from. May be None. opts: A ListPopOpts. Returns: Tuple (x, out_list_): out_list_: same as list_, after the removal was performed. x: the removed element value. Raises: ValueError: if list_ is not of a known list-like type or the operation is not supported for that type.
12011199_tf_tensor_list_poptensorflow/tensorflow/python/autograph/operators/data_structures.py272functionOverload of list_pop that stages a Tensor list pop.
12021200_py_list_poptensorflow/tensorflow/python/autograph/operators/data_structures.py289functionOverload of list_pop that executes a Python list append.
12031201ListStackOptstensorflow/tensorflow/python/autograph/operators/data_structures.py299class
12041202list_stacktensorflow/tensorflow/python/autograph/operators/data_structures.py305functionThe list stack function. This does not have a direct correspondent in Python. The closest idiom to this is tf.append or np.stack. It's different from those in the sense that it accepts a Tensor list, rather than a list of tensors. It can also accept TensorArray. When the target is anything else, the dispatcher will rely on ctx.original_call for fallback. Args: list_: An entity that supports append semantics. opts: A ListStackOpts object. Returns: The output of the stack operation, typically a Tensor.
12051203_tf_tensorarray_stacktensorflow/tensorflow/python/autograph/operators/data_structures.py335functionOverload of list_stack that stages a TensorArray stack.
12061204_tf_tensor_list_stacktensorflow/tensorflow/python/autograph/operators/data_structures.py340functionOverload of list_stack that stages a Tensor list write.
12071205_py_list_stacktensorflow/tensorflow/python/autograph/operators/data_structures.py348functionOverload of list_stack that executes a Python list append.
12081206ListTesttensorflow/tensorflow/python/autograph/operators/data_structures_test.py31class
12091207DispatchContexttensorflow/tensorflow/python/autograph/operators/dispatch_context.py27classAllows passing additional parameters to the specific implementations. Attributes: options: Optional dict of extra arguments that may be required by specific implementations.
12101208assert_stmttensorflow/tensorflow/python/autograph/operators/exceptions.py26functionFunctional form of an assert statement. This follows the semantics of the Python assert statement, however the concrete implementations may deviate from it. See the respective implementation for details. In general, the assert statement should not be used for control flow. Furthermore, it is encouraged that the assertion expressions should not have side effects. Args: expression1: Any expression2: Callable[[], Any], returns the expression to include in the error message when expression1 evaluates to False. When expression1 is True, the result of expression2 will not be evaluated, however, expression2 itself may be evaluated in some implementations. Returns: Any, implementation-dependent. Raises: ValueError: if any arguments are illegal.
12111209_tf_assert_stmttensorflow/tensorflow/python/autograph/operators/exceptions.py62functionOverload of assert_stmt that stages a TF Assert. This implementation deviates from Python semantics as follows: (1) the assertion is verified regardless of the state of __debug__ (2) on assertion failure, the graph execution will fail with tensorflow.errors.ValueError, rather than AssertionError. Args: expression1: tensorflow.Tensor, must evaluate to a tf.bool scalar expression2: Callable[[], Union[tensorflow.Tensor, List[tensorflow.Tensor]]] Returns: tensorflow.Operation
12121210_py_assert_stmttensorflow/tensorflow/python/autograph/operators/exceptions.py83functionOverload of assert_stmt that executes a Python assert statement.
12131211ExceptionsTesttensorflow/tensorflow/python/autograph/operators/exceptions_test.py28class
12141212not_tensorflow/tensorflow/python/autograph/operators/logical.py26functionFunctional form of "not".
12151213_tf_nottensorflow/tensorflow/python/autograph/operators/logical.py33functionImplementation of the "not_" operator for TensorFlow.
12161214_py_nottensorflow/tensorflow/python/autograph/operators/logical.py38functionDefault Python implementation of the "not_" operator.
12171215and_tensorflow/tensorflow/python/autograph/operators/logical.py43functionFunctional form of "and". Uses lazy evaluation semantics.
12181216_tf_lazy_andtensorflow/tensorflow/python/autograph/operators/logical.py51functionLazy-eval equivalent of "and" for Tensors.
12191217_py_lazy_andtensorflow/tensorflow/python/autograph/operators/logical.py57functionLazy-eval equivalent of "and" in Python.
12201218or_tensorflow/tensorflow/python/autograph/operators/logical.py62functionFunctional form of "or". Uses lazy evaluation semantics.
12211219_tf_lazy_ortensorflow/tensorflow/python/autograph/operators/logical.py70functionLazy-eval equivalent of "or" for Tensors.
12221220_py_lazy_ortensorflow/tensorflow/python/autograph/operators/logical.py76functionLazy-eval equivalent of "or" in Python.
12231221eqtensorflow/tensorflow/python/autograph/operators/logical.py81functionFunctional form of "equal".
12241222_tf_equaltensorflow/tensorflow/python/autograph/operators/logical.py88functionOverload of "equal" for Tensors.
12251223_py_equaltensorflow/tensorflow/python/autograph/operators/logical.py93functionOverload of "equal" that falls back to Python's default implementation.
12261224not_eqtensorflow/tensorflow/python/autograph/operators/logical.py98functionFunctional form of "not-equal".
12271225LogicalOperatorsTesttensorflow/tensorflow/python/autograph/operators/logical_test.py27class
12281226overload_oftensorflow/tensorflow/python/autograph/operators/py_builtins.py65function
12291227_find_originating_frametensorflow/tensorflow/python/autograph/operators/py_builtins.py71functionLocates the frame in which `caller_fn_scope` was defined.
12301228locals_in_original_contexttensorflow/tensorflow/python/autograph/operators/py_builtins.py92functionExecutes the locals function in the context of a specified function.
12311229globals_in_original_contexttensorflow/tensorflow/python/autograph/operators/py_builtins.py97functionExecutes the locals function in the context of a specified function.
12321230eval_in_original_contexttensorflow/tensorflow/python/autograph/operators/py_builtins.py102functionExecutes the eval function in the context of a specified function.
12331231super_in_original_contexttensorflow/tensorflow/python/autograph/operators/py_builtins.py117functionExecutes the super function in the context of a specified function. See https://docs.python.org/3/library/functions.html#super for the exact details Args: f: Callable, typically the super builtin args: List[Any], the original call arguments caller_fn_scope: Optional[function_wrappers.FunctionScope], the function scope of the converted function in which this call was originally made Returns: The result of calling `f` as if it was called in the frame indicated by `caller_fn_scope`.
12341232abs_tensorflow/tensorflow/python/autograph/operators/py_builtins.py179function
12351233_tf_abstensorflow/tensorflow/python/autograph/operators/py_builtins.py187function
12361234_tf_dataset_abstensorflow/tensorflow/python/autograph/operators/py_builtins.py191function
12371235_py_abstensorflow/tensorflow/python/autograph/operators/py_builtins.py198function
12381236float_tensorflow/tensorflow/python/autograph/operators/py_builtins.py202function
12391237_tf_floattensorflow/tensorflow/python/autograph/operators/py_builtins.py208function
12401238_py_floattensorflow/tensorflow/python/autograph/operators/py_builtins.py215function
12411239int_tensorflow/tensorflow/python/autograph/operators/py_builtins.py219function
12421240_tf_inttensorflow/tensorflow/python/autograph/operators/py_builtins.py225function
12431241_py_inttensorflow/tensorflow/python/autograph/operators/py_builtins.py235function
12441242len_tensorflow/tensorflow/python/autograph/operators/py_builtins.py241function
12451243_tf_tensor_array_lentensorflow/tensorflow/python/autograph/operators/py_builtins.py253function
12461244_tf_tensor_list_lentensorflow/tensorflow/python/autograph/operators/py_builtins.py257function
12471245_tf_tensor_lentensorflow/tensorflow/python/autograph/operators/py_builtins.py261functionOverload of len_ for Tensor arguments.
12481246_tf_dataset_lentensorflow/tensorflow/python/autograph/operators/py_builtins.py294function
12491247_py_lentensorflow/tensorflow/python/autograph/operators/py_builtins.py314function
12501248print_tensorflow/tensorflow/python/autograph/operators/py_builtins.py318functionOverload of the print builtin.
12511249_py_printtensorflow/tensorflow/python/autograph/operators/py_builtins.py334function
12521250_tf_py_func_printtensorflow/tensorflow/python/autograph/operators/py_builtins.py338functionOverload of print_ as a py_func implementation.
12531251range_tensorflow/tensorflow/python/autograph/operators/py_builtins.py360function
12541252_tf_rangetensorflow/tensorflow/python/autograph/operators/py_builtins.py366functionOverload of range_ that generates a TF range tensor.
12551253_py_rangetensorflow/tensorflow/python/autograph/operators/py_builtins.py383function
12561254enumerate_tensorflow/tensorflow/python/autograph/operators/py_builtins.py391function
12571255_tf_dataset_enumeratetensorflow/tensorflow/python/autograph/operators/py_builtins.py401function
12581256_py_enumeratetensorflow/tensorflow/python/autograph/operators/py_builtins.py405function
12591257zip_tensorflow/tensorflow/python/autograph/operators/py_builtins.py409function
12601258_tf_dataset_ziptensorflow/tensorflow/python/autograph/operators/py_builtins.py415function
12611259_py_ziptensorflow/tensorflow/python/autograph/operators/py_builtins.py419function
12621260map_tensorflow/tensorflow/python/autograph/operators/py_builtins.py423function
12631261_tf_dataset_maptensorflow/tensorflow/python/autograph/operators/py_builtins.py429function
12641262_py_maptensorflow/tensorflow/python/autograph/operators/py_builtins.py433function
12651263next_tensorflow/tensorflow/python/autograph/operators/py_builtins.py437function
12661264_verify_spec_compatibletensorflow/tensorflow/python/autograph/operators/py_builtins.py444functionVerifies that a symbol has a type compatible vith a given spec. Here, compatibility is viewed in the general TensorFlow sense: that the dtypes are the same after implicit conversion, if both are tensors. This verifier ensures consistent treatment of types across AutoGraph. Args: input_name: A name to use for `input_` in error messages. spec_name: A name to use for `spec` in error messages. input_: Any, value to verify. spec: TypeSpec that `input_` must be compatible with. Raises: ValueError if the two types have been determined not to be compatible.
12671265_verify_structure_compatibletensorflow/tensorflow/python/autograph/operators/py_builtins.py480functionVerifies that possibly-structured symbol has types compatible vith another. See _verify_spec_compatible for a more concrete meaning of "compatible". Unspec _verify_spec_compatible, which handles singular Tensor-spec objects, verify_structures_compatible can process structures recognized by tf.nest. Args: input_name: A name to use for `input_` in error messages. spec_name: A name to use for `spec` in error messages. input_: Any, value to verify. May, but doesn't need to, be a structure. spec: Any, value that `input_` must be compatible with. May, but doesn't need to, be a structure. Raises: ValueError if the two types have been determined not to be compatible.
12681266next_tf_iteratortensorflow/tensorflow/python/autograph/operators/py_builtins.py509function
12691267next_pytensorflow/tensorflow/python/autograph/operators/py_builtins.py521function
12701268filter_tensorflow/tensorflow/python/autograph/operators/py_builtins.py527function
12711269_tf_dataset_filtertensorflow/tensorflow/python/autograph/operators/py_builtins.py533function
12721270_py_filtertensorflow/tensorflow/python/autograph/operators/py_builtins.py537function
12731271any_tensorflow/tensorflow/python/autograph/operators/py_builtins.py541function
12741272_tf_dataset_anytensorflow/tensorflow/python/autograph/operators/py_builtins.py552function
12751273_py_anytensorflow/tensorflow/python/autograph/operators/py_builtins.py566function
12761274all_tensorflow/tensorflow/python/autograph/operators/py_builtins.py570function
12771275_tf_dataset_alltensorflow/tensorflow/python/autograph/operators/py_builtins.py578function
12781276_py_alltensorflow/tensorflow/python/autograph/operators/py_builtins.py592function
12791277sorted_tensorflow/tensorflow/python/autograph/operators/py_builtins.py596function
12801278_tf_sortedtensorflow/tensorflow/python/autograph/operators/py_builtins.py602functionOverload of sorted_ for Tensor iterable.
12811279_py_sortedtensorflow/tensorflow/python/autograph/operators/py_builtins.py627function
12821280TestBasetensorflow/tensorflow/python/autograph/operators/py_builtins_test.py41class
12831281PyBuiltinsTesttensorflow/tensorflow/python/autograph/operators/py_builtins_test.py48class
12841282GetItemOptstensorflow/tensorflow/python/autograph/operators/slices.py34class
12851283get_itemtensorflow/tensorflow/python/autograph/operators/slices.py38functionThe slice read operator (i.e. __getitem__). Note: it is unspecified whether target will be mutated or not. In general, if target is mutable (like Python lists), it will be mutated. Args: target: An entity that supports getitem semantics. i: Index to read from. opts: A GetItemOpts object. Returns: The read element. Raises: ValueError: if target is not of a supported type.
12861284_tf_tensorarray_get_itemtensorflow/tensorflow/python/autograph/operators/slices.py70functionOverload of get_item that stages a TensorArray read.
12871285_tf_tensor_list_get_itemtensorflow/tensorflow/python/autograph/operators/slices.py75functionOverload of get_item that stages a Tensor list read.
12881286_tf_tensor_get_itemtensorflow/tensorflow/python/autograph/operators/slices.py84functionOverload of get_item that stages a Tensor (not Tensor list) read.
12891287_tf_tensor_string_get_itemtensorflow/tensorflow/python/autograph/operators/slices.py89functionOverload of get_item that stages a Tensor string read.
12901288_py_get_itemtensorflow/tensorflow/python/autograph/operators/slices.py95functionOverload of get_item that executes a Python list modification.
12911289set_itemtensorflow/tensorflow/python/autograph/operators/slices.py100functionThe slice write operator (i.e. __setitem__). Note: it is unspecified whether target will be mutated or not. In general, if target is mutable (like Python lists), it will be mutated. Args: target: An entity that supports setitem semantics. i: Index to modify. x: The new element value. Returns: Same as target, after the update was performed. Raises: ValueError: if target is not of a supported type.
12921290_tf_tensorarray_set_itemtensorflow/tensorflow/python/autograph/operators/slices.py128functionOverload of set_item that stages a TensorArray write.
12931291_tf_tensor_list_set_itemtensorflow/tensorflow/python/autograph/operators/slices.py133functionOverload of set_item that stages a Tensor list update.
12941292_tf_tensor_set_itemtensorflow/tensorflow/python/autograph/operators/slices.py138functionOverload of set_item that stages a Tensor scatter update.
12951293_py_set_itemtensorflow/tensorflow/python/autograph/operators/slices.py143functionOverload of set_item that executes a Python list modification.
12961294SlicesTesttensorflow/tensorflow/python/autograph/operators/slices_test.py27class
12971295ldtensorflow/tensorflow/python/autograph/operators/variables.py22functionLoad variable operator.
12981296ldutensorflow/tensorflow/python/autograph/operators/variables.py29functionLoad variable operator that returns Undefined when failing to evaluate. Note: the name ("load or return undefined") is abbreviated to minimize the amount of clutter in generated code. This variant of `ld` is useful when loading symbols that may be undefined at runtime, such as composite symbols, and whether they are defined or not cannot be determined statically. For example `d['a']` is undefined when `d` is an empty dict. Args: load_v: Lambda that executes the actual read. name: Human-readable name of the symbol being read. Returns: Either the value of the symbol, or Undefined, if the symbol is not fully defined.
12991297Undefinedtensorflow/tensorflow/python/autograph/operators/variables.py54classRepresents an undefined symbol in Python. This is used to reify undefined symbols, which is required to use the functional form of loops. Example: while n > 0: n = n - 1 s = n return s # Runtime error if n == 0 This is valid Python code and will not result in an error as long as n is positive. The use of this class is to stay as close to Python semantics as possible for staged code of this nature. Converted version of the above showing the possible usage of this class: s = Undefined('s') init_state = (s,) s = while_loop(cond, body, init_state) return s # s is an instance of Undefined if the loop never runs Attributes: symbol_name: Text, identifier for the undefined symbol
13001298UndefinedReturnValuetensorflow/tensorflow/python/autograph/operators/variables.py106classRepresents a return value that is undefined.
13011299SpecialValuesTesttensorflow/tensorflow/python/autograph/operators/variables_test.py25class
13021300NoValuetensorflow/tensorflow/python/autograph/pyct/anno.py37class
13031301Basictensorflow/tensorflow/python/autograph/pyct/anno.py43classContainer for basic annotation keys. The enum values are used strictly for documentation purposes.
13041302Statictensorflow/tensorflow/python/autograph/pyct/anno.py67classContainer for static analysis annotation keys. The enum values are used strictly for documentation purposes.
13051303keystensorflow/tensorflow/python/autograph/pyct/anno.py110function
13061304getannotensorflow/tensorflow/python/autograph/pyct/anno.py116function
13071305hasannotensorflow/tensorflow/python/autograph/pyct/anno.py123function
13081306setannotensorflow/tensorflow/python/autograph/pyct/anno.py127function
13091307delannotensorflow/tensorflow/python/autograph/pyct/anno.py137function
13101308copyannotensorflow/tensorflow/python/autograph/pyct/anno.py145function
13111309duptensorflow/tensorflow/python/autograph/pyct/anno.py154functionRecursively copies annotations in an AST tree. Args: node: ast.AST copy_map: Dict[Hashable, Hashable], maps a source anno key to a destination key. All annotations with the source key will be copied to identical annotations with the destination key. field_name: str
13121310AnnoTesttensorflow/tensorflow/python/autograph/pyct/anno_test.py30class
13131311CleanCopiertensorflow/tensorflow/python/autograph/pyct/ast_util.py30classNodeTransformer-like visitor that copies an AST.
13141312copy_cleantensorflow/tensorflow/python/autograph/pyct/ast_util.py63functionCreates a deep copy of an AST. The copy will not include fields that are prefixed by '__', with the exception of user-specified annotations. Args: node: ast.AST preserve_annos: Optional[Set[Hashable]], annotation keys to include in the copy Returns: ast.AST
13151313SymbolRenamertensorflow/tensorflow/python/autograph/pyct/ast_util.py79classTransformer that can rename symbols to a simple names.
13161314rename_symbolstensorflow/tensorflow/python/autograph/pyct/ast_util.py130functionRenames symbols in an AST. Requires qual_names annotations.
13171315keywords_to_dicttensorflow/tensorflow/python/autograph/pyct/ast_util.py140functionConverts a list of ast.keyword objects to a dict.
13181316PatternMatchertensorflow/tensorflow/python/autograph/pyct/ast_util.py150classMatches a node against a pattern represented by a node.
13191317matchestensorflow/tensorflow/python/autograph/pyct/ast_util.py214functionBasic pattern matcher for AST. The pattern may contain wildcards represented by the symbol '_'. A node matches a pattern if for every node in the tree, either there is a node of the same type in pattern, or a Name node with id='_'. Args: node: ast.AST pattern: ast.AST Returns: bool
13201318apply_to_single_assignmentstensorflow/tensorflow/python/autograph/pyct/ast_util.py236functionApplies a function to each individual assignment. This function can process a possibly-unpacked (e.g. a, b = c, d) assignment. It tries to break down the unpacking if possible. In effect, it has the same effect as passing the assigned values in SSA form to apply_fn. Examples: The following will result in apply_fn(a, c), apply_fn(b, d): a, b = c, d The following will result in apply_fn(a, c[0]), apply_fn(b, c[1]): a, b = c The following will result in apply_fn(a, (b, c)): a = b, c It uses the visitor pattern to allow subclasses to process single assignments individually. Args: targets: Union[List[ast.AST, ...], Tuple[ast.AST, ...], ast.AST, should be used with the targets field of an ast.Assign node values: ast.AST apply_fn: Callable[[ast.AST, ast.AST], None], called with the respective nodes of each single assignment
13211319parallel_walktensorflow/tensorflow/python/autograph/pyct/ast_util.py283functionWalks two ASTs in parallel. The two trees must have identical structure. Args: node: Union[ast.AST, Iterable[ast.AST]] other: Union[ast.AST, Iterable[ast.AST]] Yields: Tuple[ast.AST, ast.AST] Raises: ValueError: if the two trees don't have identical structure.
13221320AstUtilTesttensorflow/tensorflow/python/autograph/pyct/ast_util_test.py35class
13231321_TransformedFnCachetensorflow/tensorflow/python/autograph/pyct/cache.py26classGeneric hierarchical cache for transformed functions. The keys are soft references (i.e. they are discarded when the key is destroyed) created from the source function by `_get_key`. The subkeys are strong references and can be any value. Typically they identify different kinds of transformation.
13241322CodeObjectCachetensorflow/tensorflow/python/autograph/pyct/cache.py63classA function cache based on code objects. Code objects are good proxies for the source code of a function. This cache efficiently handles functions that share code objects, such as functions defined in a loop, bound methods, etc. The cache falls back to the function object, if it doesn't have a code object.
13251323UnboundInstanceCachetensorflow/tensorflow/python/autograph/pyct/cache.py81classA function cache based on unbound function objects. Using the function for the cache key allows efficient handling of object methods. Unlike the _CodeObjectCache, this discriminates between different functions even if they have the same code. This is needed for decorators that may masquerade as another function.
13261324CacheTesttensorflow/tensorflow/python/autograph/pyct/cache_test.py25class
13271325Nodetensorflow/tensorflow/python/autograph/pyct/cfg.py54classA node in the CFG. Although new instances of this class are mutable, the objects that a user finds in the CFG are typically not. The nodes represent edges in the CFG graph, and maintain pointers to allow efficient walking in both forward and reverse order. The following property holds for all nodes: "child in node.next" iff "node in child.prev". Attributes: next: FrozenSet[Node, ...], the nodes that follow this node, in control flow order prev: FrozenSet[Node, ...], the nodes that precede this node, in reverse control flow order ast_node: ast.AST, the AST node corresponding to this CFG node
13281326Graphtensorflow/tensorflow/python/autograph/pyct/cfg.py95classA Control Flow Graph. The CFG maintains an index to allow looking up a CFG node by the AST node to which it is associated. The index can also be enumerated in top-down, depth first order. Walking the graph in forward or reverse order is supported by double parent-child links. Note: the error nodes are not wired to their corresponding finally guards, because these are shared, and wiring them would create a reverse path from normal control flow into the error nodes, which we want to avoid. The graph also maintains edges corresponding to higher level statements like for-else loops. A node is considered successor of a statement if there is an edge from a node that is lexically a child of that statement to a node that is not. Statement predecessors are analogously defined. Attributes: entry: Node, the entry node exit: FrozenSet[Node, ...], the exit nodes error: FrozenSet[Node, ...], nodes that exit due to an explicitly raised error (errors propagated from function calls are not accounted) index: Dict[ast.Node, Node], mapping AST nodes to the respective CFG node stmt_prev: Dict[ast.Node, FrozenSet[Node, ...]], mapping statement AST nodes to their predecessor CFG nodes stmt_next: Dict[ast.Node, FrozenSet[Node, ...]], mapping statement AST nodes to their successor CFG nodes
13291327_WalkModetensorflow/tensorflow/python/autograph/pyct/cfg.py145class
13301328GraphVisitortensorflow/tensorflow/python/autograph/pyct/cfg.py152classBase class for a CFG visitors. This implementation is not thread safe. The visitor has some facilities to simplify dataflow analyses. In particular, it allows revisiting the nodes at the decision of the subclass. This can be used to visit the graph until the state reaches a fixed point. For more details on dataflow analysis, see https://www.seas.harvard.edu/courses/cs252/2011sp/slides/Lec02-Dataflow.pdf Note: the literature generally suggests visiting successor nodes only when the state of the current node changed, regardless of whether that successor has ever been visited. This implementation visits every successor at least once. Attributes: graph: Graph in_: Dict[Node, Any], stores node-keyed state during a visit out: Dict[Node, Any], stores node-keyed state during a visit
13311329GraphBuildertensorflow/tensorflow/python/autograph/pyct/cfg.py252classBuilder that constructs a CFG from a given AST. This GraphBuilder facilitates constructing the DAG that forms the CFG when nodes are supplied in lexical order (i.e., top-down, depth first). Under these conditions, it supports building patterns found in typical structured programs. This builder ignores the flow generated by exceptions, which are assumed to always be catastrophic and present purely for diagnostic purposes (e.g. to print debug information). Statements like raise and try/catch sections are allowed and will generate control flow edges, but ordinary statements are assumed not to raise exceptions. Finally sections are also correctly interleaved between break/continue/return nodes and their subsequent statements. Important concepts: * nodes - nodes refer refer to CFG nodes; AST nodes are qualified explicitly * leaf set - since the graph is constructed gradually, a leaf set maintains the CFG nodes that will precede the node that the builder expects to receive next; when an ordinary node is added, it is connected to the existing leaves and it in turn becomes the new leaf * jump nodes - nodes that should generate edges other than what ordinary nodes would; these correspond to break, continue and return statements * sections - logical delimiters for subgraphs that require special edges; there are various types of nodes, each admitting various types of jump nodes; sections are identified by their corresponding AST node
13321330AstToCfgtensorflow/tensorflow/python/autograph/pyct/cfg.py647classConverts an AST to CFGs. A separate CFG will be constructed for each function.
13331331buildtensorflow/tensorflow/python/autograph/pyct/cfg.py964function
13341332CountingVisitortensorflow/tensorflow/python/autograph/pyct/cfg_test.py28class
13351333GraphVisitorTesttensorflow/tensorflow/python/autograph/pyct/cfg_test.py42class
13361334AstToCfgTesttensorflow/tensorflow/python/autograph/pyct/cfg_test.py93class
13371335FrameInfotensorflow/tensorflow/python/autograph/pyct/error_utils.py26class
13381336_stack_trace_inside_mapped_codetensorflow/tensorflow/python/autograph/pyct/error_utils.py34functionSummarizes inner traceback frames up to the call to a given function. This functions locates the innermost (i.e. most recent) frame that corresponds to code that can be mapped by source_map originated from, and returns a translated stack trace ending at that frame. If no such frame is found, the entire stack trace is summarized. For example, the following code: def f(): for i in tf.range(1): z = y + i # z only defined here Would generate this traceback: <converted code> ag__.for_stmt(...) <for_stmt> return _known_len_tf_for_stmt(iter_, extra_test, body, init_state) <_known_len_tf_for_stmt> _disallow_undefs_into_loop(*init_state) <_disallow_undefs_into_loop> raise ... Which is then processed into: <f> for i in tf.range(1): <for_stmt> return _known_len_tf_for_stmt(iter_, extra_test, body, init_state) <_known_len_tf_for_stmt> _disallow_undefs_into_loop(*init_state) <_disallow_undefs_into_loop> raise ... Args: tb: traceback.FrameSummary, The traceback corresponding to an error. Typically, the output of traceback.Summary.extract(capture_locals=True). source_map: Dict[LineLocation, OriginInfo], a source map as created by origin_info.create_source_map. converter_filename: str, the file path of the converted module. Call frames corresponding to this module are elided and their preceding frames are marked as allowlisted. Note that frames enclosing converted code are dropped using a different mechanism. Returns: List[FrameInfo]
13391337MultilineMessageKeyErrortensorflow/tensorflow/python/autograph/pyct/error_utils.py141class
13401338ErrorMetadataBasetensorflow/tensorflow/python/autograph/pyct/error_utils.py153classContainer objects attached to exceptions raised in user code. This metadata allows re-raising exceptions that occur in generated code, with a custom error message that includes a stack trace relative to user-readable code from which the generated code originated.
13411339ErrorMetadataBaseTesttensorflow/tensorflow/python/autograph/pyct/error_utils_test.py28class
13421340PyCTErrortensorflow/tensorflow/python/autograph/pyct/errors.py22classBase class for all exceptions.
13431341UnsupportedLanguageElementErrortensorflow/tensorflow/python/autograph/pyct/errors.py27classRaised for code patterns that AutoGraph does not support.
13441342_is_constant_gast_2tensorflow/tensorflow/python/autograph/pyct/gast_util.py31function
13451343_is_constant_gast_3tensorflow/tensorflow/python/autograph/pyct/gast_util.py36function
13461344is_literaltensorflow/tensorflow/python/autograph/pyct/gast_util.py40functionTests whether node represents a Python literal.
13471345_is_ellipsis_gast_2tensorflow/tensorflow/python/autograph/pyct/gast_util.py53function
13481346_is_ellipsis_gast_3tensorflow/tensorflow/python/autograph/pyct/gast_util.py57function
13491347islambdatensorflow/tensorflow/python/autograph/pyct/inspect_utils.py60function
13501348isnamedtupletensorflow/tensorflow/python/autograph/pyct/inspect_utils.py68functionReturns True if the argument is a namedtuple-like.
13511349isbuiltintensorflow/tensorflow/python/autograph/pyct/inspect_utils.py82functionReturns True if the argument is a built-in function.
13521350isconstructortensorflow/tensorflow/python/autograph/pyct/inspect_utils.py96functionReturns True if the argument is an object constructor. In general, any object of type class is a constructor, with the exception of classes created using a callable metaclass. See below for why a callable metaclass is not a trivial combination: https://docs.python.org/2.7/reference/datamodel.html#customizing-class-creation Args: cls: Any Returns: Bool
13531351_fix_linecache_recordtensorflow/tensorflow/python/autograph/pyct/inspect_utils.py116functionFixes potential corruption of linecache in the presence of functools.wraps. functools.wraps modifies the target object's __module__ field, which seems to confuse linecache in special instances, for example when the source is loaded from a .par file (see https://google.github.io/subpar/subpar.html). This function simply triggers a call to linecache.updatecache when a mismatch was detected between the object's __module__ property and the object's source file. Args: obj: Any
13541352getimmediatesourcetensorflow/tensorflow/python/autograph/pyct/inspect_utils.py143functionA variant of inspect.getsource that ignores the __wrapped__ property.
13551353getnamespacetensorflow/tensorflow/python/autograph/pyct/inspect_utils.py151functionReturns the complete namespace of a function. Namespace is defined here as the mapping of all non-local variables to values. This includes the globals and the closure variables. Note that this captures the entire globals collection of the function, and may contain extra symbols that it does not actually use. Args: f: User defined function. Returns: A dict mapping symbol names to values.
13561354getqualifiednametensorflow/tensorflow/python/autograph/pyct/inspect_utils.py177functionReturns the name by which a value can be referred to in a given namespace. If the object defines a parent module, the function attempts to use it to locate the object. This function will recurse inside modules, but it will not search objects for attributes. The recursion depth is controlled by max_depth. Args: namespace: Dict[str, Any], the namespace to search into. object_: Any, the value to search. max_depth: Optional[int], a limit to the recursion depth when searching inside modules. visited: Optional[Set[int]], ID of modules to avoid visiting. Returns: Union[str, None], the fully-qualified name that resolves to the value o, or None if it couldn't be found.
13571355_get_unbound_functiontensorflow/tensorflow/python/autograph/pyct/inspect_utils.py240function
13581356getdefiningclasstensorflow/tensorflow/python/autograph/pyct/inspect_utils.py250functionResolves the class (e.g. one of the superclasses) that defined a method.
13591357getmethodclasstensorflow/tensorflow/python/autograph/pyct/inspect_utils.py265functionResolves a function's owner, e.g. a method's class. Note that this returns the object that the function was retrieved from, not necessarily the class where it was defined. This function relies on Python stack frame support in the interpreter, and has the same limitations that inspect.currentframe. Limitations. This function will only work correctly if the owned class is visible in the caller's global or local variables. Args: m: A user defined function Returns: The class that this function was retrieved from, or None if the function is not an object or class method, or the class that owns the object or method is not visible to m. Raises: ValueError: if the class could not be resolved for any unexpected reason.
13601358getfutureimportstensorflow/tensorflow/python/autograph/pyct/inspect_utils.py339functionDetects what future imports are necessary to safely execute entity source. Args: entity: Any object Returns: A tuple of future strings
13611359decoratortensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py37function
13621360function_decoratortensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py41function
13631361wrapping_decoratortensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py47function
13641362TestClasstensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py59class
13651363free_functiontensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py85function
13661364factorytensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py89function
13671365free_factorytensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py93function
13681366InspectUtilsTesttensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py99class
13691367_remove_filetensorflow/tensorflow/python/autograph/pyct/loader.py37functionRemove a file, if it exists.
13701368load_sourcetensorflow/tensorflow/python/autograph/pyct/loader.py50functionLoads the given source code as a Python module.
13711369load_asttensorflow/tensorflow/python/autograph/pyct/loader.py70functionLoads the given AST as a Python module. Compiling the AST code this way ensures that the source code is readable by e.g. `pdb` or `inspect`. Args: nodes: Union[ast.AST, Iterable[ast.AST]], the code to compile, as an AST object. indentation: Text, the string to use for indentation. include_source_map: bool, whether return a source map. delete_on_exit: bool, whether to delete the temporary file used for compilation on exit. Returns: Tuple[module, Text, Dict[LineLocation, OriginInfo]], containing: the module containing the unparsed nodes, the source code corresponding to nodes, and the source map. Is include_source_map is False, the source map will be None.
13721370load_sourcetensorflow/tensorflow/python/autograph/pyct/loader_deprecated_py2.py40functionLoads the given source code as a Python module.
13731371load_asttensorflow/tensorflow/python/autograph/pyct/loader_deprecated_py2.py58functionLoads the given AST as a Python module. Compiling the AST code this way ensures that the source code is readable by e.g. `pdb` or `inspect`. Args: nodes: Union[ast.AST, Iterable[ast.AST]], the code to compile, as an AST object. indentation: Text, the string to use for indentation. include_source_map: bool, whether return a source map. delete_on_exit: bool, whether to delete the temporary file used for compilation on exit. Returns: Tuple[module, Text, Dict[LineLocation, OriginInfo]], containing: the module containing the unparsed nodes, the source code corresponding to nodes, and the source map. Is include_source_map is False, the source map will be None.
13741372LoaderTesttensorflow/tensorflow/python/autograph/pyct/loader_test.py33class
13751373Namertensorflow/tensorflow/python/autograph/pyct/naming.py24classSymbol name generator.
13761374NamerTesttensorflow/tensorflow/python/autograph/pyct/naming_test.py25class
13771375LineLocationtensorflow/tensorflow/python/autograph/pyct/origin_info.py35classSimilar to Location, but without column information. Attributes: filename: Text lineno: int, 1-based
13781376Locationtensorflow/tensorflow/python/autograph/pyct/origin_info.py46classEncodes code location information. Attributes: filename: Text lineno: int, 1-based col_offset: int line_loc: LineLocation
13791377OriginInfotensorflow/tensorflow/python/autograph/pyct/origin_info.py62classContainer for information about the source code before conversion. Attributes: loc: Location function_name: Optional[Text] source_code_line: Text comment: Optional[Text]
13801378create_source_maptensorflow/tensorflow/python/autograph/pyct/origin_info.py89functionCreates a source map between an annotated AST and the code it compiles to. Note: this function assumes nodes nodes, code and filepath correspond to the same code. Args: nodes: Iterable[ast.AST, ...], one or more AST modes. code: Text, the source code in which nodes are found. filepath: Text Returns: Dict[LineLocation, OriginInfo], mapping locations in code to locations indicated by origin annotations in node.
13811379_Functiontensorflow/tensorflow/python/autograph/pyct/origin_info.py160class
13821380OriginResolvertensorflow/tensorflow/python/autograph/pyct/origin_info.py166classAnnotates an AST with additional source information like file name.
13831381resolvetensorflow/tensorflow/python/autograph/pyct/origin_info.py226functionAdds origin information to an AST, based on the source it was loaded from. This allows us to map the original source code line numbers to generated source code. Note: the AST may be a part of a larger context (e.g. a function is part of a module that may contain other things). However, this function does not assume the source argument contains the entire context, nor that it contains only code corresponding to node itself. However, it assumes that node was parsed from the given source code. For this reason, two extra arguments are required, and they indicate the location of the node in the original context. Args: node: gast.AST, the AST to annotate. source: Text, the source code representing node. context_filepath: Text context_lineno: int context_col_offset: int
13841382resolve_entitytensorflow/tensorflow/python/autograph/pyct/origin_info.py271functionLike resolve, but extracts the context information from an entity.
13851383OriginInfoTesttensorflow/tensorflow/python/autograph/pyct/origin_info_test.py34class
13861384_unfold_continuationstensorflow/tensorflow/python/autograph/pyct/parser.py60functionRemoves any backslash line continuations from the code.
13871385dedent_blocktensorflow/tensorflow/python/autograph/pyct/parser.py65functionDedents a code so that its first line starts at row zero.
13881386parse_entitytensorflow/tensorflow/python/autograph/pyct/parser.py133functionReturns the AST and source code of given entity. Args: entity: Any, Python function/method/class future_features: Iterable[Text], future features to use (e.g. 'print_statement'). See https://docs.python.org/2/reference/simple_stmts.html#future Returns: gast.AST, Text: the parsed AST node; the source code that was parsed to generate the AST (including any prefixes that this function may have added).
13891387_without_contexttensorflow/tensorflow/python/autograph/pyct/parser.py169functionReturns a clean node and source code without indenting and context.
13901388_arg_nametensorflow/tensorflow/python/autograph/pyct/parser.py203function
13911389_node_matches_argspectensorflow/tensorflow/python/autograph/pyct/parser.py212functionReturns True is node fits the argspec of func.
13921390_parse_lambdatensorflow/tensorflow/python/autograph/pyct/parser.py234functionReturns the AST and source code of given lambda function. Args: lam: types.LambdaType, Python function/method/class Returns: gast.AST, Text: the parsed AST node; the source code that was parsed to generate the AST (including any prefixes that this function may have added).
13931391parsetensorflow/tensorflow/python/autograph/pyct/parser.py323functionReturns the AST of given piece of code. Args: src: Text preamble_len: Int, indicates leading nodes in the parsed AST which should be dropped. single_node: Bool, whether `src` is assumed to be represented by exactly one AST node. Returns: ast.AST
13941392parse_expressiontensorflow/tensorflow/python/autograph/pyct/parser.py347functionReturns the AST of given identifier. Args: src: A piece of code that represents a single Python expression Returns: A gast.AST object. Raises: ValueError: if src does not consist of a single Expression.
13951393unparsetensorflow/tensorflow/python/autograph/pyct/parser.py366functionReturns the source code of given AST. Args: node: The code to compile, as an AST object. indentation: Unused, deprecated. The returning code will always be indented at 4 spaces. include_encoding_marker: Bool, thether to include a comment on the first line to explicitly specify UTF-8 encoding. Returns: code: The source code generated from the AST object source_mapping: A mapping between the user and AutoGraph generated code.
13961394ParserTesttensorflow/tensorflow/python/autograph/pyct/parser_test.py31class
13971395PrettyPrintertensorflow/tensorflow/python/autograph/pyct/pretty_printer.py26classPrint AST nodes.
13981396fmttensorflow/tensorflow/python/autograph/pyct/pretty_printer.py128function
13991397PrettyPrinterTesttensorflow/tensorflow/python/autograph/pyct/pretty_printer_test.py28class
14001398CallerMustSetThistensorflow/tensorflow/python/autograph/pyct/qual_names.py36class
14011399Symboltensorflow/tensorflow/python/autograph/pyct/qual_names.py40classRepresents a Python symbol.
14021400Literaltensorflow/tensorflow/python/autograph/pyct/qual_names.py44classRepresents a Python numeric literal.
14031401QNtensorflow/tensorflow/python/autograph/pyct/qual_names.py57classRepresents a qualified name.
14041402QnResolvertensorflow/tensorflow/python/autograph/pyct/qual_names.py210classAnnotates nodes with QN information. Note: Not using NodeAnnos to avoid circular dependencies.
14051403resolvetensorflow/tensorflow/python/autograph/pyct/qual_names.py251function
14061404from_strtensorflow/tensorflow/python/autograph/pyct/qual_names.py255function
14071405QNTesttensorflow/tensorflow/python/autograph/pyct/qual_names_test.py31class
14081406QNResolverTesttensorflow/tensorflow/python/autograph/pyct/qual_names_test.py183class
14091407ContextAdjustertensorflow/tensorflow/python/autograph/pyct/templates.py35classAdjusts the ctx field of nodes to ensure consistency. This transformer can change the ctx fields of a variable, tuple and other AST elements that allow one, based on whether the element is being read or written.
14101408ReplaceTransformertensorflow/tensorflow/python/autograph/pyct/templates.py108classReplace AST nodes.
14111409_convert_to_asttensorflow/tensorflow/python/autograph/pyct/templates.py218functionConverts from a known data type to AST.
14121410replacetensorflow/tensorflow/python/autograph/pyct/templates.py234functionReplaces placeholders in a Python template. AST Name and Tuple nodes always receive the context that inferred from the template. However, when replacing more complex nodes (that can potentially contain Name children), then the caller is responsible for setting the appropriate context. Args: template: A string representing Python code. Any symbol name can be used that appears in the template code can be used as placeholder. **replacements: A mapping from placeholder names to (lists of) AST nodes that these placeholders will be replaced by. String values are also supported as a shorthand for AST Name nodes with the respective ID. Returns: An AST node or list of AST nodes with the replacements made. If the template was a function, a list will be returned. If the template was a node, the same node will be returned. If the template was a string, an AST node will be returned (a `Module` node in the case of a multi-line string, an `Expr` node otherwise). Raises: ValueError: if the arguments are incorrect.
14131411replace_as_expressiontensorflow/tensorflow/python/autograph/pyct/templates.py279functionVariant of replace that generates expressions, instead of code blocks.
14141412_CtxClearertensorflow/tensorflow/python/autograph/pyct/templates_test.py33class
14151413_parse_with_unset_ctxtensorflow/tensorflow/python/autograph/pyct/templates_test.py42function
14161414_CtxCheckertensorflow/tensorflow/python/autograph/pyct/templates_test.py48class
14171415TemplatesTesttensorflow/tensorflow/python/autograph/pyct/templates_test.py64class
14181416AnalysisLeveltensorflow/tensorflow/python/autograph/pyct/transformer.py32class
14191417Contexttensorflow/tensorflow/python/autograph/pyct/transformer.py41classContains information about a source code transformation. This object is mutable, and is updated during conversion. Not thread safe. Attributes: info: EntityInfo, immutable. namer: naming.Namer. current_origin: origin_info.OriginInfo, holds the OriginInfo of the last AST node to be processed successfully. Useful for error handling. user: An user-supplied context object. The object is opaque to the infrastructure, but will pe passed through to all custom transformations.
14201418EntityInfotensorflow/tensorflow/python/autograph/pyct/transformer.py63classContains information about a Python entity. Immutable. Examples of entities include functions and classes. Attributes: name: The name that identifies this entity. source_code: The entity's source code. source_file: The entity's source file. future_features: Tuple[Text], the future features that this entity was compiled with. See https://docs.python.org/2/reference/simple_stmts.html#future. namespace: Dict[str, ], containing symbols visible to the entity (excluding parameters).
14211419_StateStacktensorflow/tensorflow/python/autograph/pyct/transformer.py87classTemplated context manager. This class provides syntactic sugar for a stack of objects of known type. It allows accessing attributes of the object at the top of the stack directly against this object, which allows for very terse syntax. For example, this code: stack = _StateStack(Foo) stack.enter() stack.bar Is equivalent to: stack = [] stack.append(Foo()) foo = stack[-1] foo.bar See _State for more on how this is used. Attributes: type: Any, the type of objects that this stack holds level: int, the current stack depth stack: List[Any], the actual stack value: Any, the instance of the object at the top of the stack
14221420_Statetensorflow/tensorflow/python/autograph/pyct/transformer.py159classSyntactic sugar for accessing an instance of a StateStack context manager. This structure offers syntactic sugar over a dict of stacks of objects of known type. These structures are useful to keep state during AST walks. Multiple different scopes can be tracked in parallel. For example: s = _State() s[foo].enter() s[bar].enter() # this will not affect s[foo] Element access has special semantics: * keys are a data type * element values are _StateStack(type=key) objects * missing elements are automatically added, similarly to defaultdict For example, the following block : _State s s[Foo] Is equivalent to: s = {} if Foo not in s: s[Foo] = Foo() s[Foo] See Base for how it's used.
14231421NodeStateTrackertensorflow/tensorflow/python/autograph/pyct/transformer.py200classBase class for general-purpose Python code transformation. This abstract class provides helpful functions, like state tracking within the scope of arbitrary node, helpers for processing code blocks, debugging, mapping of transformed code to original code, and others. Scope-local state tracking: to keep state across nodes, at the level of (possibly nested) scopes, use enter/exit_local_scope and set/get_local. You must call enter/exit_local_scope manually, but the transformer detects when they are not properly paired. The transformer allows keeping state across calls that is local to arbitrary nodes and their descendants, using the self.state attribute. Multiple independent scopes are allowed and automatically constructed. For example, to keep track of the `If` node that encloses any `Name` node, one can write: ``` class FooType(object): def __init__(self): self.foo_property = None class DummyTransformer(NodeStateTracker, ast.NodeTransformer): def visit_If(self, node): self.state[FooType].enter() self.state[FooType].foo_property = node node = self.veneric_visit(node) self.state[FooType].exit() return node def visit_Name(self, node): self.state[FooType].foo_property # will hold the innermost enclosing if ``` Alternatively, the `enter()`/`exit()` calls can be managed by a `with` statement: ``` def visit_If(self, node): with self.state[FooType] as foo: foo.foo_property = node return self.generic_visit(node) ```
14241422Basetensorflow/tensorflow/python/autograph/pyct/transformer.py360classBase class for general-purpose Python-to-Python code transformation. This is an extension of ast.NodeTransformer that provides the additional functions offered by NodeStateTracker.
14251423CodeGeneratortensorflow/tensorflow/python/autograph/pyct/transformer.py478classBase class for general-purpose Python-to-string code transformation. Similar to Base, but outputs arbitrary strings instead of a Python AST. This uses the same visitor mechanism that the standard NodeVisitor uses, meaning that subclasses write handlers for the different kinds of nodes. New code is generated using the emit method, which appends to a code buffer that can be afterwards obtained from code_buffer. Example: class SimpleCodeGen(CodeGenerator): def visitIf(self, node): self.emit('if ') self.visit(node.test) self.emit(' { ') self.visit(node.body) self.emit(' } else { ') self.visit(node.orelse) self.emit(' } ') node = ast.parse(...) gen = SimpleCodeGen() gen.visit(node) # gen.code_buffer contains the resulting code
14261424TransformerTesttensorflow/tensorflow/python/autograph/pyct/transformer_test.py30class
14271425CodeGeneratorTesttensorflow/tensorflow/python/autograph/pyct/transformer_test.py302class
14281426_wrap_into_factorytensorflow/tensorflow/python/autograph/pyct/transpiler.py38functionWraps an AST into the body of a factory with consistent lexical context. The AST is expected to define some symbol with a name given by `entity_name`. This mechanism ensures that the resulting transformed entity has lexical scoping identical to that of the source entity, while allowing extra parametrization. Two nested factories achieve the following: 1. The inner factory dynamically creates the entity represented by `nodes`. 2. The inner factory is parametrized by a custom set of arguments. 3. The inner factory has a closure identical to that of the transformed entity. 4. The inner factory has local variables named like `args`, which `nodes` may use as additional parameters. 5. The inner factory returns the variables given by `entity_name`. 6. The outer factory is niladic. 7. The outer factory has no closure. 8. The outer factory creates the necessary lexical scope for the inner factory, so that the loaded code has the given configuration for closure/globals. 9. The outer factory returns the inner factory. Roughly speaking, the following code is generated: from __future__ import future_feature_1 from __future__ import future_feature_2 ... def outer_factory(): closure_var_1 = None closure_var_2 = None ... def inner_factory(arg_1, arg_2, ...): <<nodes>> return entity return inner_factory The lexical scoping is created using dummy symbol declarations which create local fariables in the body of the outer factory, so that the Python parser correctly marks them as free non-global variables upon load (that is, it creates cell slots for each symbol. Thes symbols are initialized with None, but their values are not expected to be used; instead, the caller is expected to replace them with the cells of the source entity. For more details, see: https://docs.python.org/3/reference/executionmodel.html#binding-of-names Args: nodes: Tuple[ast.AST], the source code to wrap. entity_name: Union[Text, ast.AST], the name of the principal entity that `nodes` define. inner_factory_name: Text, the name of the inner factory. outer_factory_name: Text, the name of the outer factory. closure_vars: Iterable[Text], names of the closure variables for the inner factory. factory_args: Iterable[Text], names of additional arguments for the inner factory. Useful to configure variables that the converted code can use. Typically, these are modules. future_features: Iterable[Text], names of future statements to associate the code with. Returns: ast.AST
14291427_PythonFnFactorytensorflow/tensorflow/python/autograph/pyct/transpiler.py147classHelper object that wraps a Python function factory.
14301428GenericTranspilertensorflow/tensorflow/python/autograph/pyct/transpiler.py227classA generic transpiler for Python functions. Its interface is the `transform` API, which can process Python function objects. Internally, it handles parsing. Users typically subclass this, customizing the `transform_ast` method. The output of transformed_ast is returned directly by `transform`. Existing methods like `transform_function` may also be overloaded. Example: class MyTransformer(GenericTranspiler): def transform_ast(self, node, ctx): result = <<transform node>> return result transformer = MyTransfomer() result = transformer.transform(f, ...) # result is the output
14311429PyToPytensorflow/tensorflow/python/autograph/pyct/transpiler.py368classA generic Python-to-Python transpiler. Its `transform` method offers a function-in, function-out interface. Internally, it takes care of parsing, caching and loading of the translated code. Users typically subclass this, overriding `transform_ast`. Usually, instances of this class are singletons, since each instance manages its own cache. The caching can be controlled by overriding `get_caching_key`. Example: class MyTransformer(PyToPy): def transform_ast(self, node, ctx): node = <<transform node, usually using ast.NodeTransformer classes>> return node transformer = MyTransfomer() new_f, module, source_map = transformer.transform_function(f, ...) # new_f is a function with signature identical to f The transformed function has access to the same namespace as the original function. To allow access to internal APIs, users may inject additional symbols by overriding `get_extra_locals`.
14321430FlipSignTransformertensorflow/tensorflow/python/autograph/pyct/transpiler_test.py30class
14331431TestTranspilertensorflow/tensorflow/python/autograph/pyct/transpiler_test.py38class
14341432PyToPyTesttensorflow/tensorflow/python/autograph/pyct/transpiler_test.py55class
14351433DummyGensymtensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py40classA dumb gensym that suffixes a stem by sequential numbers from 1000.
14361434ASTEdgePatterntensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py60classA pattern defining a type of AST edge. This consists of three components: - The type of the parent node, checked with isinstance, - The name of the field, checked with string equality, and - The type of the child node, also checked with isinstance. If all three match, the whole pattern is considered to match. In all three slots, the special value `anf.ANY` is treated as "match anything". The internal nodes are produced from the `gast` library rather than the standard `ast` module, which may affect `isinstance` checks.
14371435AnfTransformertensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py89classPerforms the conversion to A-normal form (ANF).
14381436_is_py2_name_constanttensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py483function
14391437_is_trivialtensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py487functionReturns whether to consider the given node 'trivial'. The definition of 'trivial' is a node that can't meaningfully be pulled out into its own assignment statement. This is surprisingly difficult to do robustly across versions of Python and gast, as the parsing of constants has changed, if I may, constantly. Args: node: An AST node to check for triviality Returns: trivial: A Python `bool` indicating whether the node is trivial.
14401438transformtensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py527functionConverts the given node to A-normal form (ANF). The general idea of A-normal form: https://en.wikipedia.org/wiki/A-normal_form The specific converters used here are based on Python AST semantics as documented at https://greentreesnakes.readthedocs.io/en/latest/. What exactly should be considered A-normal form for any given programming language is not completely obvious. The transformation defined here is therefore configurable as to which syntax to replace with a fresh variable and which to leave be. The configuration is intentionally flexible enough to define very precise variable insertion transformations, should that be desired. The configuration is a list of syntax rules, each of which is a 2-tuple: - An `ASTEdgePattern` (which see) defining a type of AST edge, and - Whether to transform children of such edges. The special object `anf.ANY` may be used as a pattern that matches all edges. Each replacement directive is one of three possible things: - The object `anf.REPLACE`, meaning "Replace this child node with a variable", - The object `anf.LEAVE`, meaning "Do not replace this child node with a variable", or - A Python callable. If a callable, it is called with the parent node, the field name, and the child node, and must compute a boolean indicating whether to transform the child node or not. The callable is free to use whatever context information it chooses. The callable may be invoked more than once on the same link, and must produce the same answer each time. The syntax rules are tested in order, and the first match governs. If no rule matches, the node is not transformed. The above rules notwithstanding, - Variable references are never replaced with (fresh) variables, as that would accomplish nothing. - The left-hand children of Assign and AugAssign nodes, and the children of Del nodes, are never replaced with variables, as that would break their semantics. - The right-hand children of Assign nodes are never replaced with variables, as the original assignment would still have to be present in the result to define the new variable. (That is, there's no point in transforming `x = sin(y)` into `tmp = sin(y); x = tmp`.) - The right-hand children of AugAssign nodes are never replaced with variables either, but only because the difference from Assign was considered a potential source of confusion (and it would have been slightly awkward in the code to treat the RHS differently than the LHS). - Various special-purpose AST nodes are not exposed to the configuration, lest the transform produce invalid syntax like, e.g., `tmp = +; x = 1 tmp 2`. For example, the configuration ```python [(anf.ASTEdgePattern(anf.ANY, anf.ANY, gast.expr), anf.REPLACE)] ``` gives explicit fresh names to all expressions regardless of context (except as outlined above), whereas ```python [(anf.ASTEdgePattern(gast.If, "test", anf.ANY), anf.REPLACE)] ``` only transforms the conditionals of `if` statements (but not, e.g., `while`). If no configuration is supplied, the default behavior is to transform all expressions except literal constants, which is defined as a configuration as ```python # For Python 3, and gast library versions before 0.3 literals = (gast.Num, gast.Str, gast.Bytes, gast.NameConstant) [(anf.ASTEdgePattern(anf.ANY, anf.ANY, literals), anf.LEAVE), (anf.ASTEdgePattern(anf.ANY, anf.ANY, gast.expr), anf.REPLACE)] ``` Args: node: The node to transform. ctx: transformer.EntityInfo. TODO(mdan): What information does this argument provide? config: Optional ANF configuration. If omitted, ANF replaces all expression expect literal constants.
14411439exec_test_functiontensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py34function
14421440exec_expected_resulttensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py40function
14431441AnfTestBasetensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py49class
14441442AnfTransformerTesttensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py85class
14451443AnfNonTransformationTesttensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py433classTest that specifying "no transformation" does nothing. Reuses all the examples of AnfTransformerTest by overriding `assert_body_anfs_as_expected_`.
14461444AnfConfiguredTesttensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py454class
14471445Scopetensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py36classEncloses local symbol definition and usage information. This can track for instance whether a symbol is modified in the current scope. Note that scopes do not necessarily align with Python's scopes. For example, the body of an if statement may be considered a separate scope. Caution - the AST references held by this object are weak. Scope objects are mutable during construction only, and must be frozen using `Scope.finalize()` before use. Furthermore, a scope is consistent only after all its children have been frozen. While analysing code blocks, scopes are being gradually built, from the innermost scope outward. Freezing indicates that the analysis of a code block is complete. Once frozen, mutation is no longer allowed. `is_final` tracks whether the scope is frozen or not. Certain properties, like `referenced`, are only accurate when called on frozen scopes. Attributes: parent: Optional[Scope], the parent scope, if any. isolated: bool, whether the scope is a true Python scope (e.g. the scope of a function), or just a surrogate tracking an ordinary code block. Using the terminology of the Python 3 reference documentation, True roughly represents an actual scope, whereas False represents an ordinary code block. function_name: Optional[str], name of the function owning this scope. isolated_names: Set[qual_names.QN], identifiers that are isolated to this scope (even if the scope is not isolated). annotations: Set[qual_names.QN], identifiers used as type annotations in this scope. read: Set[qual_names.QN], identifiers read in this scope. modified: Set[qual_names.QN], identifiers modified in this scope. deleted: Set[qual_names.QN], identifiers deleted in this scope. bound: Set[qual_names.QN], names that are bound to this scope. See https://docs.python.org/3/reference/executionmodel.html#binding-of-names for a precise definition. globals: Set[qual_names.QN], names that are explicitly marked as global in this scope. Note that this doesn't include free read-only vars bound to global symbols. nonlocals: Set[qual_names.QN], names that are explicitly marked as nonlocal in this scope. Note that this doesn't include free read-only vars bound to global symbols. free_vars: Set[qual_names.QN], the free variables in this scope. See https://docs.python.org/3/reference/executionmodel.html for a precise definition. params: WeakValueDictionary[qual_names.QN, ast.Node], function arguments visible in this scope, mapped to the function node that defines them. enclosing_scope: Scope, the innermost isolated scope that is a transitive parent of this scope. May be the scope itself. referenced: Set[qual_names.QN], the totality of the symbols used by this scope and its parents. is_final: bool, whether the scope is frozen or not. Note - simple statements may never delete and modify a symbol at the same time. However, compound ones like if statements can. In that latter case, it's undefined whether the symbol is actually modified or deleted upon statement exit. Certain analyses like reaching definitions need to be careful about this.
14481446_Comprehensiontensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py214class
14491447_FunctionOrClasstensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py224class
14501448ActivityAnalyzertensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py230classAnnotates nodes with local scope information. See Scope. The use of this class requires that qual_names.resolve() has been called on the node. This class will ignore nodes have not been annotated with their qualified names.
14511449resolvetensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py707function
14521450ActivityAnalyzerTesttensorflow/tensorflow/python/autograph/pyct/static_analysis/activity_py3_test.py31classTests which can only run in Python 3.
14531451ScopeTesttensorflow/tensorflow/python/autograph/pyct/static_analysis/activity_test.py41class
14541452ActivityAnalyzerTestBasetensorflow/tensorflow/python/autograph/pyct/static_analysis/activity_test.py114class
14551453ActivityAnalyzerTesttensorflow/tensorflow/python/autograph/pyct/static_analysis/activity_test.py148class
14561454NoValuetensorflow/tensorflow/python/autograph/pyct/static_analysis/annos.py27class
14571455NodeAnnotensorflow/tensorflow/python/autograph/pyct/static_analysis/annos.py33classAdditional annotations used by the static analyzer. These are in addition to the basic annotations declared in anno.py.
14581456Analyzertensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py40classCFG visitor that performs liveness analysis at statement level.
14591457TreeAnnotatortensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py96classRuns liveness analysis on each of the functions defined in the AST. If a function defined other local functions, those will have separate CFGs. However, dataflow analysis needs to tie up these CFGs to properly emulate the effect of closures. In the case of liveness, the parent function's live variables must account for the variables that are live at the entry of each subfunction. For example: def foo(): # baz is live from here on def bar(): print(baz) This analyzer runs liveness analysis on each individual function, accounting for the effect above.
14601458resolvetensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py206functionResolves the live symbols at the exit of control flow statements. Args: node: ast.AST source_info: transformer.SourceInfo graphs: Dict[ast.FunctionDef, cfg.Graph] include_annotations: Bool, whether type annotations should be included in the analysis. Returns: ast.AST
14611459LivenessAnalyzerTesttensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness_py3_test.py30classTests which can only run in Python 3.
14621460LivenessAnalyzerTestBasetensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness_test.py37class
14631461LivenessAnalyzerTesttensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness_test.py76class
14641462Definitiontensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py40classDefinition objects describe a unique definition of a variable. Subclasses of this may be used by passing an appropriate factory function to resolve. Attributes: param_of: Optional[ast.AST] directives: Dict, optional definition annotations
14651463_NodeStatetensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py59classAbstraction for the state of the CFG walk for reaching definition analysis. This is a value type. Only implements the strictly necessary operators. Attributes: value: Dict[qual_names.QN, Set[Definition, ...]], the defined symbols and their possible definitions
14661464Analyzertensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py112classCFG visitor that determines reaching definitions at statement level.
14671465TreeAnnotatortensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py169classAST visitor that annotates each symbol name with its reaching definitions. Simultaneously, the visitor runs the dataflow analysis on each function node, accounting for the effect of closures. For example: def foo(): bar = 1 def baz(): # bar = 1 reaches here
14681466resolvetensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py279functionResolves reaching definitions for each symbol. Args: node: ast.AST source_info: transformer.SourceInfo graphs: Dict[ast.FunctionDef, cfg.Graph] definition_factory: Callable[[], Definition] Returns: ast.AST
14691467ReachingDefinitionsAnalyzerTesttensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions_py3_test.py26classTests which can only run in Python 3.
14701468ReachingDefinitionsAnalyzerTestBasetensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions_test.py38class
14711469ReachingDefinitionsAnalyzerTesttensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions_test.py88class
14721470Definitiontensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py32classDefinition objects describe a unique definition of a function.
14731471_NodeStatetensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py39classAbstraction for the state of the CFG walk for reaching definition analysis. This is a value type. Only implements the strictly necessary operators. Attributes: value: Dict[qual_names.QN, Set[Definition, ...]], the defined symbols and their possible definitions
14741472Analyzertensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py76classCFG visitor that determines reaching definitions at statement level.
14751473TreeAnnotatortensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py109classAST visitor that annotates each symbol name with its reaching definitions. Simultaneously, the visitor runs the dataflow analysis on each function node, accounting for the effect of closures. For example: def foo(): def f(): pass def g(): # `def f` reaches here
14761474resolvetensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py170functionResolves reaching definitions for each symbol. Args: node: ast.AST source_info: transformer.SourceInfo graphs: Dict[ast.FunctionDef, cfg.Graph] Returns: ast.AST
14771475ReachingFndefsAnalyzerTesttensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs_test.py33class
14781476Resolvertensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py41classResolver objects handle the process of looking up actual names and types. All resolve_* methods: * have a first namespace argument, mapping string to actual values * specify names as QN objects * specify types as a Set of inferred types All resolve_* methods must return either: * a set of `type` objects * None
14791477_SymbolTabletensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py83classAbstraction for the state of the CFG walk for type inference. This is a value type. Only implements the strictly necessary operators. Attributes: value: Dict[qual_names.QN, Set[Type]], mapping symbols to the set of possible types.
14801478StmtInferrertensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py162classRuns type inference on a single AST statement. This visitor annotates most nodes with type information. It also sets types for the symbols modified by this statement in its types_out property.
14811479Analyzertensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py329classCFG visitor that propagates type information across statements.
14821480FunctionVisitortensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py394classAST visitor that applies type inference to each function separately.
14831481resolvetensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py417functionPerforms type inference. Args: node: ast.AST source_info: transformer.SourceInfo graphs: Dict[ast.FunctionDef, cfg.Graph] resolver: Resolver Returns: ast.AST
14841482TestResolvertensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference_test.py32classA very basic resolver for testing.
14851483TestTranspilertensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference_test.py58class
14861484TypeInferenceAnalyzerTesttensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference_test.py77class
14871485simple_functiontensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py23functionDocstring.
14881486nested_functionstensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py28functionDocstring.
14891487function_with_printtensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py37function
14901488SimpleClasstensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py44class
14911489function_with_multiline_calltensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py53functionDocstring.
14921490basic_decoratortensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py61function
14931491decorated_functiontensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py67function
14941492NodeSamplertensorflow/tensorflow/python/autograph/pyct/testing/codegen.py30class
14951493StatementSamplertensorflow/tensorflow/python/autograph/pyct/testing/codegen.py39class
14961494ExpressionSamplertensorflow/tensorflow/python/autograph/pyct/testing/codegen.py49class
14971495CompareSamplertensorflow/tensorflow/python/autograph/pyct/testing/codegen.py58class
14981496BinaryOpSamplertensorflow/tensorflow/python/autograph/pyct/testing/codegen.py71class
14991497UnaryOpSamplertensorflow/tensorflow/python/autograph/pyct/testing/codegen.py83class
15001498NameSamplertensorflow/tensorflow/python/autograph/pyct/testing/codegen.py87class
15011499CodeGeneratortensorflow/tensorflow/python/autograph/pyct/testing/codegen.py98classGenerate random syntactically-valid Python ASTs.
15021500generate_random_functiondeftensorflow/tensorflow/python/autograph/pyct/testing/codegen.py233function
15031501CodeGenTesttensorflow/tensorflow/python/autograph/pyct/testing/codegen_test.py28class
15041502wrapping_decoratortensorflow/tensorflow/python/autograph/pyct/testing/decorators.py24function
15051503standalone_decoratortensorflow/tensorflow/python/autograph/pyct/testing/decorators.py33function
15061504functional_decoratortensorflow/tensorflow/python/autograph/pyct/testing/decorators.py41function
15071505set_verbositytensorflow/tensorflow/python/autograph/utils/ag_logging.py41functionSets the AutoGraph verbosity level. _Debug logging in AutoGraph_ More verbose logging is useful to enable when filing bug reports or doing more in-depth debugging. There are two means to control the logging verbosity: * The `set_verbosity` function * The `AUTOGRAPH_VERBOSITY` environment variable `set_verbosity` takes precedence over the environment variable. For example: ```python import os import tensorflow as tf os.environ['AUTOGRAPH_VERBOSITY'] = 5 # Verbosity is now 5 tf.autograph.set_verbosity(0) # Verbosity is now 0 os.environ['AUTOGRAPH_VERBOSITY'] = 1 # No effect, because set_verbosity was already called. ``` Logs entries are output to [absl](https://abseil.io)'s [default output](https://abseil.io/docs/python/guides/logging), with `INFO` level. Logs can be mirrored to stdout by using the `alsologtostdout` argument. Mirroring is enabled by default when Python runs in interactive mode. Args: level: int, the verbosity level; larger values specify increased verbosity; 0 means no logging. When reporting bugs, it is recommended to set this value to a larger number, like 10. alsologtostdout: bool, whether to also output log messages to `sys.stdout`.
15081506tracetensorflow/tensorflow/python/autograph/utils/ag_logging.py92functionTraces argument information at compilation time. `trace` is useful when debugging, and it always executes during the tracing phase, that is, when the TF graph is constructed. _Example usage_ ```python import tensorflow as tf for i in tf.range(10): tf.autograph.trace(i) # Output: <Tensor ...> ``` Args: *args: Arguments to print to `sys.stdout`.
15091507get_verbositytensorflow/tensorflow/python/autograph/utils/ag_logging.py114function
15101508has_verbositytensorflow/tensorflow/python/autograph/utils/ag_logging.py121function
15111509_output_to_stdouttensorflow/tensorflow/python/autograph/utils/ag_logging.py125function
15121510errortensorflow/tensorflow/python/autograph/utils/ag_logging.py131function
15131511logtensorflow/tensorflow/python/autograph/utils/ag_logging.py138function
15141512warntensorflow/tensorflow/python/autograph/utils/ag_logging.py145function
15151513BasicReftensorflow/tensorflow/python/autograph/utils/compat_util.py27classThis shim emulates the nonlocal keyword in Py2-compatible source.
15161514deprecated_py2_supporttensorflow/tensorflow/python/autograph/utils/compat_util.py34functionSwaps calling module with a Py2-specific implementation. Noop in Py3.
15171515control_dependency_on_returnstensorflow/tensorflow/python/autograph/utils/context_managers.py27functionCreate a TF control dependency on the return values of a function. If the function had no return value, a no-op context is returned. Args: return_value: The return value to set as control dependency. Returns: A context manager.
15181516ContextManagersTesttensorflow/tensorflow/python/autograph/utils/context_managers_test.py28class
15191517alias_tensorstensorflow/tensorflow/python/autograph/utils/misc.py27functionWraps any Tensor arguments with an identity op. Any other argument, including Variables, is returned unchanged. Args: *args: Any arguments. Must contain at least one element. Returns: Same as *args, with Tensor instances replaced as described. Raises: ValueError: If args doesn't meet the requirements.
15201518get_range_lentensorflow/tensorflow/python/autograph/utils/misc.py55function
15211519MiscTesttensorflow/tensorflow/python/autograph/utils/misc_test.py30class
15221520MatchDTypetensorflow/tensorflow/python/autograph/utils/py_func.py28classAllows matching the dtype of an argument. Used in conjunction with function calls. For example, MatchDType(0) will match the DType of the first argument.
15231521wrap_py_functensorflow/tensorflow/python/autograph/utils/py_func.py38functionHelper that wraps a callable to py_func. The helper passes tensor arguments through the py_func interface. Non-tensor arguments are allowed, and will be passed to f directly. Note that non-tensor arguments are captured by f will not update every time the wrapper is called (this is consistent with its argument list, which only includes the tensor arguments). In general, it's safest not to reuse this wrapper. Args: f: Callable return_dtypes: None, individual of tuple/list of DType or MatchDType, the data type for each of f's return value(s). Set to None if f has no return values or use_dummy_return is True. Use MatchDType to define a dtype identical to that of `i`th argument (argument 0 is the first); an argument must of Tensor type if it is to be used with MatchDType. args: Positional arguments for f, as list or tuple. kwargs: Keyword arguments for f, as dict with string keys. May be None. use_dummy_return: If True, the function will return a dummy value of 1 and discard its actual return value. Returns: The return values of f converted to tensor. Raises: ValueError: if any of the arguments are incorrect.
15241522PyFuncTesttensorflow/tensorflow/python/autograph/utils/py_func_test.py27class
15251523dynamic_list_appendtensorflow/tensorflow/python/autograph/utils/tensor_list.py26functionConverts a list append call inline.
15261524TensorListtensorflow/tensorflow/python/autograph/utils/tensor_list.py43classTensor list wrapper API-compatible with Python built-in list.
15271525TensorListTesttensorflow/tensorflow/python/autograph/utils/tensor_list_test.py32class
15281526is_dense_tensortensorflow/tensorflow/python/autograph/utils/tensors.py32function
15291527is_tensor_arraytensorflow/tensorflow/python/autograph/utils/tensors.py38function
15301528is_tensor_listtensorflow/tensorflow/python/autograph/utils/tensors.py42function
15311529is_range_tensortensorflow/tensorflow/python/autograph/utils/tensors.py51functionReturns True if a tensor is the result of a tf.range op. Best effort.
15321530TensorsTesttensorflow/tensorflow/python/autograph/utils/tensors_test.py30class
15331531AutoGraphTestCasetensorflow/tensorflow/python/autograph/utils/testing.py30classTests specialized for AutoGraph, which run as tf.functions. These tests use a staged programming-like approach: most of the test code runs as-is inside a tf.function, but the assertions are lifted outside the function, and run with the corresponding function values instead. For example, the test: def test_foo(self): baz = bar(); self.assertEqual(baz, value) is equivalent to writing: def test_foo(self): @tf.function def test_fn(): baz = bar(); return baz, value baz_actual, value_actual = test_fn() self.assertEqual(baz_actual, value_actual)
15341532list_local_devicestensorflow/tensorflow/python/client/device_lib.py25functionList the available devices available in the local process. Args: session_config: a session config proto or None to use the default config. Returns: A list of `DeviceAttribute` protocol buffers.
15351533DeviceLibTesttensorflow/tensorflow/python/client/device_lib_test.py28class
15361534PywrapeventsWriterTesttensorflow/tensorflow/python/client/events_writer_test.py33class
15371535maintensorflow/tensorflow/python/client/notebook.py53function
15381536TF_NewSessionOptionstensorflow/tensorflow/python/client/pywrap_tf_session.py51function
15391537TF_Resettensorflow/tensorflow/python/client/pywrap_tf_session.py65function
15401538SessionInterfacetensorflow/tensorflow/python/client/session.py51classBase class for implementations of TensorFlow client sessions.
15411539_get_indexed_slices_value_from_fetchestensorflow/tensorflow/python/client/session.py77function
15421540_get_feeds_for_indexed_slicestensorflow/tensorflow/python/client/session.py83function
15431541_convert_to_numpy_objtensorflow/tensorflow/python/client/session.py139functionExplicitly convert obj based on numpy type except for string type.
15441542register_session_run_conversion_functionstensorflow/tensorflow/python/client/session.py144functionRegister fetch and feed conversion functions for `tf.Session.run()`. This function registers a triple of conversion functions for fetching and/or feeding values of user-defined types in a call to tf.Session.run(). An example ```python class SquaredTensor(object): def __init__(self, tensor): self.sq = tf.square(tensor) #you can define conversion functions as follows: fetch_function = lambda squared_tensor:([squared_tensor.sq], lambda val: val[0]) feed_function = lambda feed, feed_val: [(feed.sq, feed_val)] feed_function_for_partial_run = lambda feed: [feed.sq] #then after invoking this register function, you can use as follows: session.run(squared_tensor1, feed_dict = {squared_tensor2 : some_numpy_array}) ``` Args: tensor_type: The type for which you want to register a conversion function. fetch_function: A callable that takes an object of type `tensor_type` and returns a tuple, where the first element is a list of `tf.Tensor` objects, and the second element is a callable that takes a list of ndarrays and returns an object of some value type that corresponds to `tensor_type`. fetch_function describes how to expand fetch into its component Tensors and how to contract the fetched results back into a single return value. feed_function: A callable that takes feed_key and feed_value as input, and returns a list of tuples (feed_tensor, feed_val), feed_key must have type `tensor_type`, and feed_tensor must have type `tf.Tensor`. Each feed function describes how to unpack a single fed value and map it to feeds of one or more tensors and their corresponding values. feed_function_for_partial_run: A callable for specifying tensor values to feed when setting up a partial run, which takes a `tensor_type` type object as input, and returns a list of Tensors. Raises: ValueError: If `tensor_type` has already been registered.
15451543_is_attrs_instancetensorflow/tensorflow/python/client/session.py199functionReturns True if the given obj is an instance of attrs-decorated class.
15461544_get_attrs_valuestensorflow/tensorflow/python/client/session.py204functionReturns the list of values from an attrs instance.
15471545_FetchMappertensorflow/tensorflow/python/client/session.py210classDefinition of the interface provided by fetch mappers. Fetch mappers are utility classes used by the _FetchHandler to handle arbitrary structures for the `fetch` argument to `Session.run()`. The `fetch` argument can be of various shapes: single tensor or op, list of fetches, tuple of fetches, namedtuple of fetches, or dict of fetches. The structures can be arbitrarily nested. The low level run() API only wants a list of tensor or op names. The various `_FetchMapper` subclasses below take care of handling the different shapes: uniquifying the fetches, and constructing results with the original shape.
15481546_ElementFetchMappertensorflow/tensorflow/python/client/session.py282classFetch mapper for singleton tensors and ops.
15491547_uniquify_fetchestensorflow/tensorflow/python/client/session.py329functionUniquifies fetches from a list of fetch_mappers. This is a utility function used by _ListFetchMapper and _DictFetchMapper. It gathers all the unique fetches from a list of mappers and builds a list containing all of them but without duplicates (unique_fetches). It also returns a 2-D list of integers (values_indices) indicating at which index in unique_fetches the fetches of the mappers are located. This list is as follows: values_indices[mapper_index][mapper_fetch_index] = unique_fetches_index Args: fetch_mappers: list of fetch mappers. Returns: A list of fetches. A 2-D list of integers.
15501548_ListFetchMappertensorflow/tensorflow/python/client/session.py365classFetch mapper for lists, tuples, and namedtuples.
15511549_DictFetchMappertensorflow/tensorflow/python/client/session.py399classFetch mapper for dicts.
15521550_AttrsFetchMappertensorflow/tensorflow/python/client/session.py425classFetch mapper for attrs decorated classes.
15531551_FetchHandlertensorflow/tensorflow/python/client/session.py449classHandler for structured fetches. Given a graph, a user-provided structure for fetches, and a feed dict, this class takes care of generating a list of tensor names to fetch and op names to run for a low level `run()` call. Given the results of the low level run call, this class can also rebuild a result structure matching the user-provided structure for fetches, but containing the corresponding results.
15541552_name_listtensorflow/tensorflow/python/client/session.py573functionUtility function for transitioning to the new session API. Args: tensor_list: a list of `Tensor`s. Returns: A list of each `Tensor`s name (as byte arrays).
15551553_DeviceAttributestensorflow/tensorflow/python/client/session.py585classStruct-like object describing a device's attributes. Each device has 3 key properties: - name: the fully-qualified TensorFlow path to the device. For example: /job:worker/replica:0/task:3/device:CPU:0 - device_type: the type of the device (e.g. CPU, GPU, TPU, etc.) - memory_limit_bytes: the maximum amount of memory available on the device (in bytes).
15561554BaseSessiontensorflow/tensorflow/python/client/session.py627classA class for interacting with a TensorFlow computation. The BaseSession enables incremental graph building with inline execution of Operations and evaluation of Tensors.
15571555Sessiontensorflow/tensorflow/python/client/session.py1509classA class for running TensorFlow operations. A `Session` object encapsulates the environment in which `Operation` objects are executed, and `Tensor` objects are evaluated. For example: ```python tf.compat.v1.disable_eager_execution() # need to disable eager in TF2.x # Build a graph. a = tf.constant(5.0) b = tf.constant(6.0) c = a * b # Launch the graph in a session. sess = tf.compat.v1.Session() # Evaluate the tensor `c`. print(sess.run(c)) # prints 30.0 ``` A session may own resources, such as `tf.Variable`, `tf.queue.QueueBase`, and `tf.compat.v1.ReaderBase`. It is important to release these resources when they are no longer required. To do this, either invoke the `tf.Session.close` method on the session, or use the session as a context manager. The following two examples are equivalent: ```python # Using the `close()` method. sess = tf.compat.v1.Session() sess.run(...) sess.close() # Using the context manager. with tf.compat.v1.Session() as sess: sess.run(...) ``` The [`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto) protocol buffer exposes various configuration options for a session. For example, to create a session that uses soft constraints for device placement, and log the resulting placement decisions, create a session as follows: ```python # Launch the graph in a session that allows soft device placement and # logs the placement decisions. sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto( allow_soft_placement=True, log_device_placement=True)) ```
15581556InteractiveSessiontensorflow/tensorflow/python/client/session.py1679classA TensorFlow `Session` for use in interactive contexts, such as a shell. The only difference with a regular `Session` is that an `InteractiveSession` installs itself as the default session on construction. The methods `tf.Tensor.eval` and `tf.Operation.run` will use that session to run ops. This is convenient in interactive shells and [IPython notebooks](http://ipython.org), as it avoids having to pass an explicit `Session` object to run ops. For example: ```python sess = tf.compat.v1.InteractiveSession() a = tf.constant(5.0) b = tf.constant(6.0) c = a * b # We can just use 'c.eval()' without passing 'sess' print(c.eval()) sess.close() ``` Note that a regular session installs itself as the default session when it is created in a `with` statement. The common usage in non-interactive programs is to follow that pattern: ```python a = tf.constant(5.0) b = tf.constant(6.0) c = a * b with tf.compat.v1.Session(): # We can also use 'c.eval()' here. print(c.eval()) ```
15591557SessionBenchmarktensorflow/tensorflow/python/client/session_benchmark.py36classTests and benchmarks for interacting with the `tf.compat.v1.Session`.
15601558SessionClusterSpecPropagationTesttensorflow/tensorflow/python/client/session_clusterspec_prop_test.py45class
15611559SessionListDevicesTesttensorflow/tensorflow/python/client/session_list_devices_test.py33class
15621560PartialRunTesttensorflow/tensorflow/python/client/session_partial_run_test.py35class
15631561SessionTesttensorflow/tensorflow/python/client/session_test.py72class
15641562AllocationMaximumtensorflow/tensorflow/python/client/timeline.py32classStores the maximum allocation for a given allocator within the timelne. Parameters: timestamp: `tensorflow::Env::NowMicros()` when this maximum was reached. num_bytes: the total memory used at this time. tensors: the set of tensors allocated at this time.
15651563StepStatsAnalysistensorflow/tensorflow/python/client/timeline.py44classStores the step stats analysis output. Parameters: chrome_trace: A dict containing the chrome trace analysis. allocator_maximums: A dict mapping allocator names to AllocationMaximum.
15661564_ChromeTraceFormattertensorflow/tensorflow/python/client/timeline.py55classA helper class for generating traces in Chrome Trace Format.
15671565_TensorTrackertensorflow/tensorflow/python/client/timeline.py265classAn internal class to track the lifetime of a Tensor.
15681566Timelinetensorflow/tensorflow/python/client/timeline.py346classA class for visualizing execution timelines of TensorFlow steps.
15691567TimelineTesttensorflow/tensorflow/python/client/timeline_test.py34class
15701568VirtualGpuTestUtiltensorflow/tensorflow/python/client/virtual_gpu_test.py38class
15711569VirtualGpuTesttensorflow/tensorflow/python/client/virtual_gpu_test.py195class
15721570_date_to_date_numbertensorflow/tensorflow/python/compat/compat.py41function
15731571_update_forward_compatibility_date_numbertensorflow/tensorflow/python/compat/compat.py45functionUpdate the base date to compare in forward_compatible function.
15741572forward_compatibletensorflow/tensorflow/python/compat/compat.py70functionReturn true if the forward compatibility window has expired. See [Version compatibility](https://tensorflow.org/guide/version_compat#backward_forward). Forward-compatibility refers to scenarios where the producer of a TensorFlow model (a GraphDef or SavedModel) is compiled against a version of the TensorFlow library newer than what the consumer was compiled against. The "producer" is typically a Python program that constructs and trains a model while the "consumer" is typically another program that loads and serves the model. TensorFlow has been supporting a 3 week forward-compatibility window for programs compiled from source at HEAD. For example, consider the case where a new operation `MyNewAwesomeAdd` is created with the intent of replacing the implementation of an existing Python wrapper - `tf.add`. The Python wrapper implementation should change from something like: ```python def add(inputs, name=None): return gen_math_ops.add(inputs, name) ``` to: ```python from tensorflow.python.compat import compat def add(inputs, name=None): if compat.forward_compatible(year, month, day): # Can use the awesome new implementation. return gen_math_ops.my_new_awesome_add(inputs, name) # To maintain forward compatibility, use the old implementation. return gen_math_ops.add(inputs, name) ``` Where `year`, `month`, and `day` specify the date beyond which binaries that consume a model are expected to have been updated to include the new operations. This date is typically at least 3 weeks beyond the date the code that adds the new operation is committed. Args: year: A year (e.g., 2018). Must be an `int`. month: A month (1 <= month <= 12) in year. Must be an `int`. day: A day (1 <= day <= 31, or 30, or 29, or 28) in month. Must be an `int`. Returns: True if the caller can expect that serialized TensorFlow graphs produced can be consumed by programs that are compiled with the TensorFlow library source code after (year, month, day).
15751573forward_compatibility_horizontensorflow/tensorflow/python/compat/compat.py131functionContext manager for testing forward compatibility of generated graphs. See [Version compatibility](https://tensorflow.org/guide/version_compat#backward_forward). To ensure forward compatibility of generated graphs (see `forward_compatible`) with older binaries, new features can be gated with: ```python if compat.forward_compatible(year=2018, month=08, date=01): generate_graph_with_new_features() else: generate_graph_so_older_binaries_can_consume_it() ``` However, when adding new features, one may want to unittest it before the forward compatibility window expires. This context manager enables such tests. For example: ```python from tensorflow.python.compat import compat def testMyNewFeature(self): with compat.forward_compatibility_horizon(2018, 08, 02): # Test that generate_graph_with_new_features() has an effect ``` Args: year: A year (e.g., 2018). Must be an `int`. month: A month (1 <= month <= 12) in year. Must be an `int`. day: A day (1 <= day <= 31, or 30, or 29, or 28) in month. Must be an `int`. Yields: Nothing.
15761574CompatTesttensorflow/tensorflow/python/compat/compat_test.py27class
15771575DisableV2BehaviorTesttensorflow/tensorflow/python/compat/disable_v2_behavior_test.py27class
15781576enable_v2_behaviortensorflow/tensorflow/python/compat/v2_compat.py43functionEnables TensorFlow 2.x behaviors. This function can be called at the beginning of the program (before `Tensors`, `Graphs` or other structures have been created, and before devices have been initialized. It switches all global behaviors that are different between TensorFlow 1.x and 2.x to behave as intended for 2.x. This function is called in the main TensorFlow `__init__.py` file, user should not need to call it, except during complex migrations.
15791577disable_v2_behaviortensorflow/tensorflow/python/compat/v2_compat.py82functionDisables TensorFlow 2.x behaviors. This function can be called at the beginning of the program (before `Tensors`, `Graphs` or other structures have been created, and before devices have been initialized. It switches all global behaviors that are different between TensorFlow 1.x and 2.x to behave as intended for 1.x. User can call this function to disable 2.x behavior during complex migrations.
15801578convert_graph_deftensorflow/tensorflow/python/compiler/mlir/mlir.py26functionImport a GraphDef and convert it to a textual MLIR module. Args: graph_def: An object of type graph_pb2.GraphDef or a textual proto representation of a valid GraphDef. pass_pipeline: A textual description of an MLIR Pass Pipeline to run on the module, see MLIR documentation for the [textual pass pipeline syntax](https://github.com/tensorflow/mlir/blob/master/g3doc/WritingAPass.md#textual-pass-pipeline-specification). Returns: A textual representation of the MLIR module corresponding to the graphdef. Raises a RuntimeError on error.
15811579MLIRImportTesttensorflow/tensorflow/python/compiler/mlir/mlir_test.py26class
15821580_to_bytestensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py84functionEncode s if it is a sequence of chars.
15831581_to_stringtensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py91functionDecode s if it is a sequence of bytes.
15841582TrtPrecisionModetensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py98class
15851583TrtConversionParamstensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py117classParameters that are used for TF-TRT conversion. Fields: rewriter_config_template: a template RewriterConfig proto used to create a TRT-enabled RewriterConfig. If None, it will use a default one. max_workspace_size_bytes: the maximum GPU temporary memory which the TRT engine can use at execution time. This corresponds to the 'workspaceSize' parameter of nvinfer1::IBuilder::setMaxWorkspaceSize(). precision_mode: one the strings in TrtPrecisionMode.supported_precision_modes(). minimum_segment_size: the minimum number of nodes required for a subgraph to be replaced by TRTEngineOp. is_dynamic_op: whether to generate dynamic TRT ops which will build the TRT network and engine at run time. i.e. Since TensorRT version < 6.0 does not support dynamic dimensions other than the batch dimension, when the TensorFlow graph has a non-batch dimension of dynamic size, we would need to enable this option. This option should be set to True in TF 2.0. maximum_cached_engines: max number of cached TRT engines for dynamic TRT ops. Created TRT engines for a dynamic dimension are cached. This is the maximum number of engines that can be cached. If the number of cached engines is already at max but none of them supports the input shapes, the TRTEngineOp will fall back to run the original TF subgraph that corresponds to the TRTEngineOp. use_calibration: this argument is ignored if precision_mode is not INT8. If set to True, a calibration graph will be created to calibrate the missing ranges. The calibration graph must be converted to an inference graph by running calibration with calibrate(). If set to False, quantization nodes will be expected for every tensor in the graph (excluding those which will be fused). If a range is missing, an error will occur. Please note that accuracy may be negatively affected if there is a mismatch between which tensors TRT quantizes and which tensors were trained with fake quantization. max_batch_size: max size for the input batch. This parameter is only effective when is_dynamic_op=False which is not supported in TF 2.0. allow_build_at_runtime: whether to build TensorRT engines during runtime. If no TensorRT engine can be found in cache that can handle the given inputs during runtime, then a new TensorRT engine is built at runtime if allow_build_at_runtime=True, and otherwise native TF is used. This argument is only effective if is_dynamic_op=True.
15861584_check_conversion_paramstensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py188functionValidate the provided TrtConversionParams. Args: conversion_params: a TrtConversionParams instance. is_v2: whether we're getting a RewriterConfig for TF 2.0. Raises: TypeError: if any of the parameters are of unexpected type. ValueError: if any of the parameters are of unexpected value.
15871585_check_trt_version_compatibilitytensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py252functionCheck compatibility of TensorRT version. Raises: RuntimeError: if the TensorRT library version is incompatible.
15881586get_tensorrt_rewriter_configtensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py292functionReturns a RewriterConfig proto for TRT transformation. Args: conversion_params: a TrtConversionParams instance. is_v2: whether we're getting a RewriterConfig for TF 2.0. disable_non_trt_optimizers: Turn off all default Grappler optimizers. Returns: A RewriterConfig proto which sets a TensorRTOptimizer to run Grappler. Raises: TypeError: if any of the parameters are of unexpected type. ValueError: if any of the parameters are of unexpected value.
15891587_get_canonical_engine_nametensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py383function
15901588is_explicit_batch_mode_enabledtensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py387functionChecks whether explicit batch is enabled by the rewriter config.
15911589TrtGraphConvertertensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py398classA converter for TF-TRT transformation for TF 1.x GraphDef/SavedModels. To run the conversion without quantization calibration (e.g. for FP32/FP16 precision modes): ```python converter = TrtGraphConverter( input_saved_model_dir="my_dir", precision_mode=TrtPrecisionMode.FP16) converted_graph_def = converter.convert() converter.save(output_saved_model_dir) ``` To run the conversion with quantization calibration: ```python converter = TrtGraphConverter( input_saved_model_dir="my_dir", precision_mode=TrtPrecisionMode.INT8) converter.convert() # Run calibration 10 times. converted_graph_def = converter.calibrate( fetch_names=['output:0'], num_runs=10, feed_dict_fn=lambda: {'input:0': my_next_data()}) converter.save(output_saved_model_dir) ```
15921590_get_resource_handletensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py833function
15931591_TRTEngineResourceDeletertensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py838classResource deleter for destroying TRT engine cache resource.
15941592_TRTEngineResourcetensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py853classClass to track the serialized engines resource.
15951593TrtGraphConverterV2tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py880classAn offline converter for TF-TRT transformation for TF 2.0 SavedModels. Currently this is not available on Windows platform. Note that in V2, is_dynamic_op=False is not supported, meaning TRT engines will be built only when the corresponding TRTEngineOp is executed. But we still provide a way to avoid the cost of building TRT engines during inference (see more below). There are several ways to run the conversion: 1. FP32/FP16 precision ```python params = tf.experimental.tensorrt.ConversionParams( precision_mode='FP16') converter = tf.experimental.tensorrt.Converter( input_saved_model_dir="my_dir", conversion_params=params) converter.convert() converter.save(output_saved_model_dir) ``` In this case, no TRT engines will be built or saved in the converted SavedModel. But if input data is available during conversion, we can still build and save the TRT engines to reduce the cost during inference (see option 2 below). 2. FP32/FP16 precision with pre-built engines ```python params = tf.experimental.tensorrt.ConversionParams( precision_mode='FP16', # Set this to a large enough number so it can cache all the engines. maximum_cached_engines=16) converter = tf.experimental.tensorrt.Converter( input_saved_model_dir="my_dir", conversion_params=params) converter.convert() # Define a generator function that yields input data, and use it to execute # the graph to build TRT engines. # With TensorRT 5.1, different engines will be built (and saved later) for # different input shapes to the TRTEngineOp. def my_input_fn(): for _ in range(num_runs): inp1, inp2 = ... yield inp1, inp2 converter.build(input_fn=my_input_fn) # Generate corresponding TRT engines converter.save(output_saved_model_dir) # Generated engines will be saved. ``` In this way, one engine will be built/saved for each unique input shapes of the TRTEngineOp. This is good for applications that cannot afford building engines during inference but have access to input data that is similar to the one used in production (for example, that has the same input shapes). Also, the generated TRT engines is platform dependent, so we need to run `build()` in an environment that is similar to production (e.g. with same type of GPU). 3. INT8 precision and calibration with pre-built engines ```python params = tf.experimental.tensorrt.ConversionParams( precision_mode='INT8', # Currently only one INT8 engine is supported in this mode. maximum_cached_engines=1, use_calibration=True) converter = tf.experimental.tensorrt.Converter( input_saved_model_dir="my_dir", conversion_params=params) # Define a generator function that yields input data, and run INT8 # calibration with the data. All input data should have the same shape. # At the end of convert(), the calibration stats (e.g. range information) # will be saved and can be used to generate more TRT engines with different # shapes. Also, one TRT engine will be generated (with the same shape as # the calibration data) for save later. def my_calibration_input_fn(): for _ in range(num_runs): inp1, inp2 = ... yield inp1, inp2 converter.convert(calibration_input_fn=my_calibration_input_fn) # (Optional) Generate more TRT engines offline (same as the previous # option), to avoid the cost of generating them during inference. def my_input_fn(): for _ in range(num_runs): inp1, inp2 = ... yield inp1, inp2 converter.build(input_fn=my_input_fn) # Save the TRT engine and the engines. converter.save(output_saved_model_dir) ```
15961594create_inference_graphtensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py1270functionPython wrapper for the TRT transformation. Args: input_graph_def: a GraphDef object containing a model to be transformed. If set to None, the graph will be read from the SavedModel loaded from input_saved_model_dir. outputs: list of tensors or node names for the model outputs. Only used when input_graph_def is not None. max_batch_size: max size for the input batch. max_workspace_size_bytes: the maximum GPU temporary memory which the TRT engine can use at execution time. This corresponds to the 'workspaceSize' parameter of nvinfer1::IBuilder::setMaxWorkspaceSize(). precision_mode: one of TrtPrecisionMode.supported_precision_modes(). minimum_segment_size: the minimum number of nodes required for a subgraph to be replaced by TRTEngineOp. is_dynamic_op: whether to generate dynamic TRT ops which will build the TRT network and engine at run time. maximum_cached_engines: max number of cached TRT engines in dynamic TRT ops. If the number of cached engines is already at max but none of them can serve the input, the TRTEngineOp will fall back to run the TF function based on which the TRTEngineOp is created. input_saved_model_dir: the directory to load the SavedModel which contains the input graph to transforms. Used only when input_graph_def is None. input_saved_model_tags: list of tags to load the SavedModel. input_saved_model_signature_key: the key of the signature to optimize the graph for. output_saved_model_dir: if not None, construct a SavedModel using the returned GraphDef and save it to the specified directory. This option only works when the input graph is loaded from a SavedModel, i.e. when input_saved_model_dir is specified and input_graph_def is None. session_config: the ConfigProto used to create a Session. It's also used as a template to create a TRT-enabled ConfigProto for conversion. If not specified, a default ConfigProto will be used. Returns: A GraphDef transformed from input_graph_def (or the SavedModel graph def loaded from input_saved_model_dir, if input_graph_def is not present), where all TRT compatible subgraphs are replaced with TRTEngineOps, and a TF function is added for each of the subgraphs. If is_dynamic_op is True, each TRTEngineOp will contain a serialized subgraph GraphDef, which will be converted to a TRT engine at execution time and the TRT engine will be cached for future usage. A new TRT engine will be created each time when none of the cached engines match the input shapes. If it fails to execute the TRT engine or the number of cached engines reaches maximum_cached_engines, the op will fall back to call the corresponding TF function. If is_dynamic_op is False, each TRTEngineOp will contain a serialized TRT engine created from the corresponding subgraph. No more engines will be created on the fly, and the op will fall back to call the corresponding TF function when it fails to execute the engine. Raises: ValueError: if the combination of the parameters is invalid.
15971595TrtConvertTesttensorflow/tensorflow/python/compiler/tensorrt/trt_convert_test.py67classClass to test Tensorflow-TensorRT integration python API.
15981596TrtPrecisionModetensorflow/tensorflow/python/compiler/tensorrt/trt_convert_windows.py31class
15991597TrtConversionParamstensorflow/tensorflow/python/compiler/tensorrt/trt_convert_windows.py43classParameters that are used for TF-TRT conversion. Fields: rewriter_config_template: a template RewriterConfig proto used to create a TRT-enabled RewriterConfig. If None, it will use a default one. max_workspace_size_bytes: the maximum GPU temporary memory which the TRT engine can use at execution time. This corresponds to the 'workspaceSize' parameter of nvinfer1::IBuilder::setMaxWorkspaceSize(). precision_mode: one the strings in TrtPrecisionMode.supported_precision_modes(). minimum_segment_size: the minimum number of nodes required for a subgraph to be replaced by TRTEngineOp. is_dynamic_op: whether to generate dynamic TRT ops which will build the TRT network and engine at run time. i.e. Since TensorRT version < 6.0 does not support dynamic dimensions other than the batch dimension, when the TensorFlow graph has a non-batch dimension of dynamic size, we would need to enable this option. This option should be set to True in TF 2.0. maximum_cached_engines: max number of cached TRT engines for dynamic TRT ops. Created TRT engines for a dynamic dimension are cached. This is the maximum number of engines that can be cached. If the number of cached engines is already at max but none of them supports the input shapes, the TRTEngineOp will fall back to run the original TF subgraph that corresponds to the TRTEngineOp. use_calibration: this argument is ignored if precision_mode is not INT8. If set to True, a calibration graph will be created to calibrate the missing ranges. The calibration graph must be converted to an inference graph by running calibration with calibrate(). If set to False, quantization nodes will be expected for every tensor in the graph (exlcuding those which will be fused). If a range is missing, an error will occur. Please note that accuracy may be negatively affected if there is a mismatch between which tensors TRT quantizes and which tensors were trained with fake quantization. max_batch_size: max size for the input batch. This parameter is only effective when is_dynamic_op=False which is not supported in TF 2.0.
16001598TrtConverterWindowstensorflow/tensorflow/python/compiler/tensorrt/trt_convert_windows.py97classAn offline converter for TF-TRT transformation for TF 2.0 SavedModels. Currently this is not available on Windows platform.
16011599SimpleSingleEngineTesttensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py34class
16021600SimpleMultiEnginesTesttensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py74class
16031601SimpleMultiEnginesTest2tensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py134class
16041602ConstInputTesttensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py175class
16051603ConstDataInputSingleEngineTesttensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py211class
16061604ConstDataInputMultipleEnginesTesttensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py232class
16071605ControlDependencyTesttensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py264class
16081606BatchMatMulTwoTensorTesttensorflow/tensorflow/python/compiler/tensorrt/test/batch_matmul_test.py32classTesting conversion of BatchMatMul where both inputs are tensors.
16091607BatchMatMulWeightBroadcastTesttensorflow/tensorflow/python/compiler/tensorrt/test/batch_matmul_test.py50classTesting BatchMatMulV2: one operand is weight and both have same rank.
16101608BatchMatMulWeightBroadcastDims2Testtensorflow/tensorflow/python/compiler/tensorrt/test/batch_matmul_test.py69classTesting BatchMatMulV2: weight operand must be broadcasted.
16111609BiasaddMatMulTesttensorflow/tensorflow/python/compiler/tensorrt/test/biasadd_matmul_test.py33classTesting conversion of BiasAdd MatMul in TF-TRT conversion.
16121610BinaryTensorWeightBroadcastTesttensorflow/tensorflow/python/compiler/tensorrt/test/binary_tensor_weight_broadcast_test.py30classTests for scale & elementwise layers in TF-TRT.
16131611CastInt32ToFp32Testtensorflow/tensorflow/python/compiler/tensorrt/test/cast_test.py31classTests cast to FP32 are splitted in FP16 mode.
16141612CombinedNmsTesttensorflow/tensorflow/python/compiler/tensorrt/test/combined_nms_test.py30classTest for CombinedNMS op in TF-TRT.
16151613ConcatenationTesttensorflow/tensorflow/python/compiler/tensorrt/test/concatenation_test.py32classTesting Concatenation in TF-TRT conversion.
16161614ConstBroadcastTesttensorflow/tensorflow/python/compiler/tensorrt/test/const_broadcast_test.py28classTest for Constant broadcasting in TF-TRT.
16171615conv2d_layertensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py32function
16181616div_round_uptensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py62function
16191617build_graphtensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py66function
16201618Conv2DNCHWTesttensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py83classTesting conversion of Conv2D (data_format=NCHW) in TF-TRT conversion.
16211619Conv2DNHWCTesttensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py118classTesting conversion of Conv2D (data_format=NCHW) in TF-TRT conversion.
16221620Conv2DStridedNCHWTesttensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py141classTesting conversion of strided Conv2D (data_format=NCHW).
16231621Conv2DTranposeTesttensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py172classTesting conversion of conv2d_transpose (AKA Conv2DBackpropInput)
16241622DynamicInputShapesTesttensorflow/tensorflow/python/compiler/tensorrt/test/dynamic_input_shapes_test.py32class
16251623IdentityTesttensorflow/tensorflow/python/compiler/tensorrt/test/identity_output_test.py36classTesting engine with the same tensor repeated as output via identity.
16261624ExcludeUnsupportedInt32Testtensorflow/tensorflow/python/compiler/tensorrt/test/int32_test.py32classTest exclusion of ops which are not supported in INT32 mode by TF-TRT
16271625CalibrationInt32Supporttensorflow/tensorflow/python/compiler/tensorrt/test/int32_test.py68classTest execution of calibration with int32 input
16281626LRUCacheTesttensorflow/tensorflow/python/compiler/tensorrt/test/lru_cache_test.py33class
16291627MemoryAlignmentTesttensorflow/tensorflow/python/compiler/tensorrt/test/memory_alignment_test.py31classTesting conversion of BatchMatMul in TF-TRT conversion.
16301628MultiConnectionNeighborEngineTesttensorflow/tensorflow/python/compiler/tensorrt/test/multi_connection_neighbor_engine_test.py31classTest for multi connection neighboring nodes wiring tests in TF-TRT.
16311629NeighboringEngineTesttensorflow/tensorflow/python/compiler/tensorrt/test/neighboring_engine_test.py32classNeighboring node wiring tests in TF-TRT conversion.
16321630QuantizationAwareTrainingMNISTTesttensorflow/tensorflow/python/compiler/tensorrt/test/quantization_mnist_test.py59classTesting usage of quantization ranges inserted in graph.
16331631_GraphFntensorflow/tensorflow/python/compiler/tensorrt/test/quantization_test.py33function
16341632_GetParamstensorflow/tensorflow/python/compiler/tensorrt/test/quantization_test.py53function
16351633QuantizationMissingAllRangesTesttensorflow/tensorflow/python/compiler/tensorrt/test/quantization_test.py57classCreate a graph containing single segment with no quantization ranges.
16361634QuantizationWithRangesTesttensorflow/tensorflow/python/compiler/tensorrt/test/quantization_test.py82classCreate a graph containing single segment with no quantization ranges.
16371635NonQuantizedPrecisionsWithRangesTesttensorflow/tensorflow/python/compiler/tensorrt/test/quantization_test.py110classCreate a graph containing single segment with no quantization ranges.
16381636RankTwoTesttensorflow/tensorflow/python/compiler/tensorrt/test/rank_two_test.py30classTest for rank 2 input in TF-TRT.
16391637ReshapeTesttensorflow/tensorflow/python/compiler/tensorrt/test/reshape_transpose_test.py28class
16401638TransposeTesttensorflow/tensorflow/python/compiler/tensorrt/test/reshape_transpose_test.py79class
16411639IncompatibleTransposeTesttensorflow/tensorflow/python/compiler/tensorrt/test/reshape_transpose_test.py108class
16421640IsQuantizationModetensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py95function
16431641IsQuantizationWithCalibrationtensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py99function
16441642GraphStatetensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py103class
16451643TfTrtIntegrationTestBasetensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py109classClass to test Tensorflow-TensorRT integration.
16461644_GetTestConfigsV1tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py883functionReturns the config combinations to run the test.
16471645_GetTestConfigsV2tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py902functionReturns the config combinations to run the test.
16481646_GetTesttensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py928functionGets a single test method based on the parameters.
16491647_AddTestsFortensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py942functionAdds test methods to TfTrtIntegrationTestBase for specific TF version.
16501648_AddTeststensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py967functionAdds test methods to TfTrtIntegrationTestBase.
16511649TopKTesttensorflow/tensorflow/python/compiler/tensorrt/test/topk_test.py29classTesting Top-K in TF-TRT conversion.
16521650TopKOutputTypeTesttensorflow/tensorflow/python/compiler/tensorrt/test/topk_test.py50classTesting that output type of engine using Top-K is set correctly.
16531651TrtModeTestBasetensorflow/tensorflow/python/compiler/tensorrt/test/trt_mode_test.py31classTest squeeze on batch dim and some unary operations in TF-TRT.
16541652ImplicitBatchTesttensorflow/tensorflow/python/compiler/tensorrt/test/trt_mode_test.py81class
16551653ExplicitBatchTesttensorflow/tensorflow/python/compiler/tensorrt/test/trt_mode_test.py104class
16561654DynamicShapesTesttensorflow/tensorflow/python/compiler/tensorrt/test/trt_mode_test.py140classTest with dynamic input shapes. DynamicShapesTest is different from ExplicitBatchTest in that it uses input and output masks to change the input and output shapes to unknown shapes.
16571655UnaryTesttensorflow/tensorflow/python/compiler/tensorrt/test/unary_test.py33classTest for unary operations in TF-TRT.
16581656VGGBlockNCHWTesttensorflow/tensorflow/python/compiler/tensorrt/test/vgg_block_nchw_test.py35classSingle vgg layer in NCHW unit tests in TF-TRT.
16591657VGGBlockTesttensorflow/tensorflow/python/compiler/tensorrt/test/vgg_block_test.py35classSingle vgg layer test in TF-TRT conversion.
16601658GetGraphtensorflow/tensorflow/python/compiler/tensorrt/test/testdata/gen_tftrt_model.py49functionDefine graph.
16611659GenerateModelV2tensorflow/tensorflow/python/compiler/tensorrt/test/testdata/gen_tftrt_model.py59functionGenerate and convert a model using TFv2 API.
16621660GenerateModelV1tensorflow/tensorflow/python/compiler/tensorrt/test/testdata/gen_tftrt_model.py90functionGenerate and convert a model using TFv1 API.
16631661ExperimentalCompileTesttensorflow/tensorflow/python/compiler/xla/experimental_compile_test.py30class
16641662_XlaScopetensorflow/tensorflow/python/compiler/xla/jit.py32classKeeps track of previous XLA scope calls, and depth of current call.
16651663experimental_jit_scopetensorflow/tensorflow/python/compiler/xla/jit.py42functionEnable or disable JIT compilation of operators within the scope. NOTE: This is an experimental feature. The compilation is a hint and only supported on a best-effort basis. Example usage: ```python with tf.xla.experimental.jit_scope(): c = tf.matmul(a, b) # compiled with tf.xla.experimental.jit_scope(compile_ops=False): d = tf.matmul(a, c) # not compiled with tf.xla.experimental.jit_scope( compile_ops=lambda node_def: 'matmul' in node_def.op.lower()): e = tf.matmul(a, b) + d # matmul is compiled, the addition is not. ``` Example of `separate_compiled_gradients`: ```python # In the example below, the computations for f, g and h will all be compiled # in separate scopes. with tf.xla.experimental.jit_scope( separate_compiled_gradients=True): f = tf.matmul(a, b) g = tf.gradients([f], [a, b], name='mygrads1') h = tf.gradients([f], [a, b], name='mygrads2') ``` Args: compile_ops: Whether to enable or disable compilation in the scope. Either a Python bool, or a callable that accepts the parameter `node_def` and returns a python bool. separate_compiled_gradients: If true put each gradient subgraph into a separate compilation scope. This gives fine-grained control over which portions of the graph will be compiled as a single unit. Compiling gradients separately may yield better performance for some graphs. The scope is named based on the scope of the forward computation as well as the name of the gradients. As a result, the gradients will be compiled in a scope that is separate from both the forward computation, and from other gradients. Raises: RuntimeError: if called when eager execution is enabled. Yields: The current scope, enabling or disabling compilation.
16661664enable_jit_nonstatefultensorflow/tensorflow/python/compiler/xla/jit_test.py39function
16671665JITTesttensorflow/tensorflow/python/compiler/xla/jit_test.py47class
16681666CompilationEnabledInGradientTesttensorflow/tensorflow/python/compiler/xla/jit_test.py187class
16691667compiletensorflow/tensorflow/python/compiler/xla/xla.py67functionBuilds an operator that compiles and runs `computation` with XLA. NOTE: In eager mode, `computation` will have `@tf.function` semantics. Args: computation: A Python function that builds a computation to apply to the input. If the function takes n inputs, 'inputs' should be a list of n tensors. `computation` may return a list of operations and tensors. Tensors must come before operations in the returned list. The return value of `compile` is a list of tensors corresponding to the tensors from the output of `computation`. All `Operation`s returned from `computation` will be executed when evaluating any of the returned output tensors. inputs: A list of inputs or `None` (equivalent to an empty list). Each input can be a nested structure containing values that are convertible to tensors. Note that passing an N-dimension list of compatible values will result in a N-dimension list of scalar tensors rather than a single Rank-N tensors. If you need different behavior, convert part of inputs to tensors with `tf.convert_to_tensor`. Returns: Same data structure as if computation(*inputs) is called directly with some exceptions for correctness. Exceptions include: 1) None output: a NoOp would be returned which control-depends on computation. 2) Single value output: A tuple containing the value would be returned. 3) Operation-only outputs: a NoOp would be returned which control-depends on computation. TODO(b/121383831): Investigate into removing these special cases. Raises: RuntimeError: if called when eager execution is enabled. Known issues: When a tf.random operation is built with XLA, the implementation doesn't pass the user provided seed to the XLA compiler. As such, the XLA compiler generates a random number and uses it as a seed when compiling the operation. This implementation causes a violation of the Tensorflow defined semantics in two aspects. First, changing the value of the user defined seed doesn't change the numbers generated by the operation. Second, when a seed is not specified, running the program multiple times will generate the same numbers.
16701668XLACompileContexttensorflow/tensorflow/python/compiler/xla/xla.py125classA `ControlFlowContext` for nodes inside an XLA computation cluster. THIS IS ONLY FOR TENSORFLOW INTERNAL IMPLEMENTATION, DO NO USE DIRECTLY. The primary role of `XLACompileContext` is to mark operators inside a xla.compile() computation with attribute "_xla_compile_id=XYZ", where XYZ is a unique name. `ControlFlowContext` is used to perform the annotation since it integrates with Tensorflow constructs like ResourceVariables. For example, if a `ResourceVariable` is constructed inside a xla.compile() block, the `ResourceVariable` implementation can use `with ops.control_dependencies(None)` to build the variable's definition outside the compiled computation.
16711669_compile_internaltensorflow/tensorflow/python/compiler/xla/xla.py306functionBuilds graph operators that compiles and symbolically executes computation. Args: computation: A Python function that builds the computation to compile and execute. inputs: A list of inputs or `None` (equivalent to an empty list). Each input can be a nested structure containing values that are convertible to tensors. Note that passing an N-dimension list of compatible values will result in a N-dimension list of scalar tensors rather than a single Rank-N tensors. If you need different behavior, convert part of inputs to tensors with `tf.convert_to_tensor`. Returns: Same data structure as if computation(*inputs) is called directly with some exceptions for correctness. Exceptions include: 1) None output 2) Single value output 3) Operation-only outputs Raises: ValueError: If any element in computation outputs is neither an operations or a value that can be converted to tensor. ValueError: If computation outputs is non-flat and contains any Operations. TypeError: If `inputs` is not a list or tuple.
16721670is_flattensorflow/tensorflow/python/compiler/xla/xla.py409functionChecks if outputs is a flat structure. Following structures and values are considered flat: 1) None 2) A single object 3) A list or tuple of Tensors/Operations The only structures that this function understands are sequences, dictionaries and types defined using the attrs library. E.g. this means that if outputs contains a single user-defined Object, it is considered to be flat. Errors are raised later on if that Object cannot be converted to a Tensor. Args: outputs: Output from `computation` inside `xla.compile`. Returns: A boolean indicates whether outputs is flat.
16731671_postprocess_flat_outputstensorflow/tensorflow/python/compiler/xla/xla.py451functionValidates flat outputs and adds back device assignments. Args: outputs: Output from `computation` inside `xla.compile`. Returns: Tensors and Operations extracted from outputs.
16741672_postprocess_non_flat_outputstensorflow/tensorflow/python/compiler/xla/xla.py503functionValidates non-flat outputs and adds back device assignments. Args: outputs: Output from `computation` inside `xla.compile`. Returns: Tensors extracted from outputs and an empty list because Operations are not allowed in non-flat outputs..
16751673_disable_summary_contexttensorflow/tensorflow/python/compiler/xla/xla.py539functionEnters a context where all summary ops are skipped. Summaries are not yet supported in xla.compile(). So we provide this context manager that can skip creating summary ops. This is a temporary workaround due to XLA not supporting summary ops. Yields: None.
16761674_CapturedObjecttensorflow/tensorflow/python/compiler/xla/xla.py558classA placeholder to capture an object.
16771675_get_scaffoldtensorflow/tensorflow/python/compiler/xla/xla.py576functionRetrieves the Scaffold from `captured_scaffold_fn`.
16781676check_function_argument_counttensorflow/tensorflow/python/compiler/xla/xla.py591functionValidate the number of input arguments to an XLA function. Args: func: the Python function that will be called to generate the body of an XLA computation graph. input_arity: the number of explicit arguments supplied by the caller. infeed_queue: if not None, the infeed queue that will supply additional arguments to the function. Returns: None if function can be called with the supplied number of arguments, or an error string if it cannot.
16791677XLACompileContextTesttensorflow/tensorflow/python/compiler/xla/xla_test.py47class
16801678XlaCompileTesttensorflow/tensorflow/python/compiler/xla/xla_test.py217class
16811679CheckFunctionArgumentCountTesttensorflow/tensorflow/python/compiler/xla/xla_test.py260class
16821680BatchBenchmarktensorflow/tensorflow/python/data/benchmarks/batch_benchmark.py27classBenchmarks for `tf.data.Dataset.batch()`.
16831681DatasetBenchmarkBasetensorflow/tensorflow/python/data/benchmarks/benchmark_base.py31classBase class for dataset benchmarks.
16841682FilterBenchmarktensorflow/tensorflow/python/data/benchmarks/filter_benchmark.py26classBenchmarks for `tf.data.Dataset.filter()`.
16851683SingleThreadedFlatMapDatasettensorflow/tensorflow/python/data/benchmarks/from_tensor_slices_benchmark.py30classA `Dataset` that maps a function over its input and flattens the result.
16861684FromTensorSlicesBenchmarktensorflow/tensorflow/python/data/benchmarks/from_tensor_slices_benchmark.py62classBenchmarks for `tf.data.Dataset.from_tensor_slices()`.
16871685ListFilesBenchmarktensorflow/tensorflow/python/data/benchmarks/list_files_benchmark.py35classBenchmarks for `tf.data.Dataset.list_files()`.
16881686MapBenchmarktensorflow/tensorflow/python/data/benchmarks/map_benchmark.py32classBenchmarks for `tf.data.Dataset.map()`.
16891687MetaBenchmarktensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py31classBenchmark that compares various ways of running tf.data benchmarks.
16901688PrefetchBenchmarktensorflow/tensorflow/python/data/benchmarks/prefetch_benchmark.py24classBenchmarks for `tf.data.Dataset.prefetch()`.
16911689RangeBenchmarktensorflow/tensorflow/python/data/benchmarks/range_benchmark.py24classBenchmarks for `tf.data.Dataset.range()`.
16921690AutotuneBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/autotune_benchmark.py31classBenchmarks for autotuning performance knobs.
16931691ChooseFastestBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/choose_fastest_benchmark.py31classBenchmarks for static optimizations.
16941692ChooseFastestBranchBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/choose_fastest_branch_benchmark.py26classBenchmarks for ChooseFastestBranchDatast.
16951693CsvDatasetBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/csv_dataset_benchmark.py38classBenchmarks for `tf.data.experimental.CsvDataset`.
16961694MapAndBatchBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/map_and_batch_benchmark.py40classBenchmarks for `tf.data.experimental.map_and_batch()`.
16971695MapDefunBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/map_defun_benchmark.py34classBenchmarks for MapDefunOp.
16981696_generate_csv_test_casetensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py37functionGenerates a `decode_csv()` test case.
16991697_generate_parse_single_example_test_casetensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py57functionGenerates a `parse_single_example()` test case.
17001698MapVectorizationBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py97classBenchmarks for the `MapVectorization` optimization.
17011699MatchingFilesBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/matching_files_benchmark.py35classBenchmark for the experimental `MatchingFilesDataset`.
17021700OptimizationBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/optimize_benchmark.py32classBenchmarks for static optimizations.
17031701_make_fake_dataset_fntensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py36functionReturns a dataset that emulates a remote storage data source. Returns a dataset factory which creates a dataset with 100 elements that emulates the performance characteristic of a file-based dataset stored in a remote storage. In particular, the first element will take an order of magnitude longer to produce than the remaining elements (100ms vs. 1ms). Args: initial_delay_us: How long to wait before producing the first element. remainder_delay_us: How long to wait before producing subsequent elements.
17041702ParallelInterleaveBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py68classBenchmarks for `tf.data.experimental.parallel_interleave()`.
17051703_time_resamplingtensorflow/tensorflow/python/data/experimental/benchmarks/rejection_resample_benchmark.py31function
17061704RejectionResampleBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/rejection_resample_benchmark.py56classBenchmarks for `tf.data.experimental.rejection_resample()`.
17071705SnapshotDatasetBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/snapshot_dataset_benchmark.py34classBenchmarks for `tf.data.experimental.snapshot()`.
17081706UnbatchBenchmarktensorflow/tensorflow/python/data/experimental/benchmarks/unbatch_benchmark.py32classBenchmarks for `tf.data.Dataset.unbatch()`.
17091707AssertCardinalityTesttensorflow/tensorflow/python/data/experimental/kernel_tests/assert_cardinality_test.py30classTests for `tf.data.experimental.assert_cardinality()`.
17101708AssertNextTesttensorflow/tensorflow/python/data/experimental/kernel_tests/assert_next_test.py30class
17111709chunktensorflow/tensorflow/python/data/experimental/kernel_tests/auto_shard_dataset_test.py46function
17121710AutoShardDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/auto_shard_dataset_test.py51class
17131711AutoShardTextLineDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/auto_shard_dataset_test.py509class
17141712_element_length_fntensorflow/tensorflow/python/data/experimental/kernel_tests/bucket_by_sequence_length_test.py37function
17151713_to_sparse_tensortensorflow/tensorflow/python/data/experimental/kernel_tests/bucket_by_sequence_length_test.py42function
17161714_format_recordtensorflow/tensorflow/python/data/experimental/kernel_tests/bucket_by_sequence_length_test.py46function
17171715_get_record_typetensorflow/tensorflow/python/data/experimental/kernel_tests/bucket_by_sequence_length_test.py56function
17181716_get_record_shapetensorflow/tensorflow/python/data/experimental/kernel_tests/bucket_by_sequence_length_test.py66function
17191717BucketBySequenceLengthTesttensorflow/tensorflow/python/data/experimental/kernel_tests/bucket_by_sequence_length_test.py76class
17201718_test_objectstensorflow/tensorflow/python/data/experimental/kernel_tests/compression_ops_test.py31function
17211719CompressionOpsTesttensorflow/tensorflow/python/data/experimental/kernel_tests/compression_ops_test.py53class
17221720CopyToDeviceTesttensorflow/tensorflow/python/data/experimental/kernel_tests/copy_to_device_test.py40class
17231721CounterTesttensorflow/tensorflow/python/data/experimental/kernel_tests/counter_test.py30class
17241722CsvDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/csv_dataset_test.py40class
17251723_make_scalar_dstensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py38functionCreate a test dataset with scalar elements.
17261724_make_vector_dstensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py43functionCreate a test dataset with vector elements (of varying size).
17271725_make_matrix_ds1tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py48functionCreate a test dataset with matrix elements (of varying size).
17281726_make_matrix_ds2tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py53functionCreate a test dataset with matrix elements (of varying size).
17291727_make_matrix_ds_fully_definedtensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py58functionCreate a test dataset with matrix elements (of varying size).
17301728_make_5dtensor_dstensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py63functionCreate a test dataset with matrix elements (of varying size).
17311729_make_ragged_dstensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py69functionCreate a test dataset with RaggedTensor elements (of varying size).
17321730_make_dict_dstensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py76functionCreate a test set with various element shapes.
17331731_make_tuple_dstensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py89functionCreate a test set with various element shapes.
17341732_to_listtensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py98function
17351733RaggedBatchTesttensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py102class
17361734DenseToSparseBatchTesttensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_sparse_batch_test.py32class
17371735DirectedInterleaveDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/directed_interleave_dataset_test.py34class
17381736GetSingleElementTesttensorflow/tensorflow/python/data/experimental/kernel_tests/get_single_element_test.py34class
17391737GroupByReducerTesttensorflow/tensorflow/python/data/experimental/kernel_tests/group_by_reducer_test.py37class
17401738GroupByWindowTesttensorflow/tensorflow/python/data/experimental/kernel_tests/group_by_window_test.py41class
17411739IgnoreErrorsTesttensorflow/tensorflow/python/data/experimental/kernel_tests/ignore_errors_test.py40class
17421740IOTesttensorflow/tensorflow/python/data/experimental/kernel_tests/io_test.py32class
17431741MakeBatchedFeaturesDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/make_batched_features_dataset_test.py38class
17441742MakeCsvDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/make_csv_dataset_test.py38class
17451743MakeTFRecordDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/make_tf_record_dataset_test.py33class
17461744MapAndBatchTesttensorflow/tensorflow/python/data/experimental/kernel_tests/map_and_batch_test.py43class
17471745_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/map_defun_op_test.py44function
17481746MapDefunTesttensorflow/tensorflow/python/data/experimental/kernel_tests/map_defun_op_test.py48class
17491747MatchingFilesDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/matching_files_test.py34class
17501748ModelDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/model_dataset_test.py30class
17511749NonSerializableTesttensorflow/tensorflow/python/data/experimental/kernel_tests/non_serializable_test.py29class
17521750_captured_refvar_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimize_dataset_test.py44function
17531751OptimizeDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimize_dataset_test.py106class
17541752OverrideThreadpoolTesttensorflow/tensorflow/python/data/experimental/kernel_tests/override_threadpool_test.py38class
17551753ParallelInterleaveTesttensorflow/tensorflow/python/data/experimental/kernel_tests/parallel_interleave_test.py42class
17561754ParseExampleDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/parse_example_dataset_test.py54class
17571755PrefetchToDeviceTesttensorflow/tensorflow/python/data/experimental/kernel_tests/prefetch_to_device_test.py37class
17581756PrefetchWithSlackTesttensorflow/tensorflow/python/data/experimental/kernel_tests/prefetch_with_slack_test.py33class
17591757RandomDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/random_dataset_test.py29class
17601758FixedLengthRecordDatasetTestBasetensorflow/tensorflow/python/data/experimental/kernel_tests/reader_dataset_ops_test_base.py36classBase class for setting up and testing FixedLengthRecordDataset.
17611759MakeBatchedFeaturesDatasetTestBasetensorflow/tensorflow/python/data/experimental/kernel_tests/reader_dataset_ops_test_base.py63classBase class for setting up and testing `make_batched_features_dataset`.
17621760TextLineDatasetTestBasetensorflow/tensorflow/python/data/experimental/kernel_tests/reader_dataset_ops_test_base.py271classBase class for setting up and testing TextLineDataset.
17631761TFRecordDatasetTestBasetensorflow/tensorflow/python/data/experimental/kernel_tests/reader_dataset_ops_test_base.py311classBase class for setting up and testing TFRecordDataset.
17641762BatchSizesForWorkerTesttensorflow/tensorflow/python/data/experimental/kernel_tests/rebatch_dataset_test.py35class
17651763_flat_shapestensorflow/tensorflow/python/data/experimental/kernel_tests/rebatch_dataset_test.py113function
17661764RebatchDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/rebatch_dataset_test.py120class
17671765LegacyRebatchDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/rebatch_dataset_test.py325class
17681766ComputeBatchSizeTesttensorflow/tensorflow/python/data/experimental/kernel_tests/rebatch_dataset_test.py500class
17691767RejectionResampleTesttensorflow/tensorflow/python/data/experimental/kernel_tests/rejection_resample_test.py36class
17701768LocalReplicateTesttensorflow/tensorflow/python/data/experimental/kernel_tests/replicate_test.py44class
17711769_get_server_deftensorflow/tensorflow/python/data/experimental/kernel_tests/replicate_test.py225functionReturns a server def with a single job + multiple tasks.
17721770EagerClusterReplicateTesttensorflow/tensorflow/python/data/experimental/kernel_tests/replicate_test.py245class
17731771GraphClusterReplicateTesttensorflow/tensorflow/python/data/experimental/kernel_tests/replicate_test.py327class
17741772ScanTesttensorflow/tensorflow/python/data/experimental/kernel_tests/scan_test.py46class
17751773ShuffleAndRepeatTesttensorflow/tensorflow/python/data/experimental/kernel_tests/shuffle_and_repeat_test.py32class
17761774SleepTesttensorflow/tensorflow/python/data/experimental/kernel_tests/sleep_test.py32class
17771775SnapshotDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/snapshot_test.py40class
17781776LegacySnapshotDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/snapshot_test.py318class
17791777SqlDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/sql_dataset_test.py32class
17801778SqlDatasetTestBasetensorflow/tensorflow/python/data/experimental/kernel_tests/sql_dataset_test_base.py30classBase class for setting up and testing SqlDataset.
17811779StatsDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/stats_dataset_ops_test.py39class
17821780ThreadUtilizationStatsTesttensorflow/tensorflow/python/data/experimental/kernel_tests/stats_dataset_ops_test.py334class
17831781FeatureStatsDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/stats_dataset_ops_test.py399class
17841782StatsDatasetTestBasetensorflow/tensorflow/python/data/experimental/kernel_tests/stats_dataset_test_base.py37classBase class for testing statistics gathered in `StatsAggregator`.
17851783_events_from_filetensorflow/tensorflow/python/data/experimental/kernel_tests/stats_dataset_test_base.py311functionReturns all events in a single event file. Args: filepath: Path to the event file. Returns: A list of all tf.Event protos in the event file.
17861784_events_from_logdirtensorflow/tensorflow/python/data/experimental/kernel_tests/stats_dataset_test_base.py329functionReturns all events in the single eventfile in logdir. Args: logdir: The directory in which the single event file is sought. Returns: A list of all tf.Event protos from the single event file. Raises: AssertionError: If logdir does not contain exactly one file.
17871785TakeWhileTesttensorflow/tensorflow/python/data/experimental/kernel_tests/take_while_test.py34class
17881786TFRecordWriterTesttensorflow/tensorflow/python/data/experimental/kernel_tests/tf_record_writer_test.py39class
17891787UniqueTesttensorflow/tensorflow/python/data/experimental/kernel_tests/unique_test.py32class
17901788VariantTesttensorflow/tensorflow/python/data/experimental/kernel_tests/variant_test.py28class
17911789WrapDatasetVariantTesttensorflow/tensorflow/python/data/experimental/kernel_tests/wrap_unwrap_test.py31class
17921790ChooseFastestBranchDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/choose_fastest_branch_dataset_test.py34class
17931791ChooseFastestDatasetTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/choose_fastest_dataset_test.py31class
17941792_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/filter_fusion_test.py34function
17951793FilterFusionTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/filter_fusion_test.py62class
17961794FilterWithRandomUniformFusionTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/filter_with_random_uniform_fusion_test.py30class
17971795GrapplerTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/grappler_test.py37class
17981796_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/hoist_random_uniform_test.py38function
17991797HoistRandomUniformTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/hoist_random_uniform_test.py68class
18001798InjectPrefetchTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/inject_prefetch_test.py29class
18011799LatencyAllEdgesTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/latency_all_edges_test.py31class
18021800MapAndBatchFusionTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_and_batch_fusion_test.py29class
18031801_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_and_filter_fusion_test.py34function
18041802MapAndFilterFusionTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_and_filter_fusion_test.py77class
18051803_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_fusion_test.py34function
18061804MapFusionTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_fusion_test.py66class
18071805_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_parallelization_test.py37function
18081806MapParallelizationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_parallelization_test.py58class
18091807_generate_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py51function
18101808_unary_bitwise_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py60function
18111809_unary_logical_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py65function
18121810_unary_complex_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py70function
18131811_unary_real_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py81function
18141812_binary_bitwise_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py135function
18151813_binary_logical_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py144function
18161814_binary_real_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py150function
18171815MapVectorizationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py192class
18181816_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/noop_elimination_test.py34function
18191817NoopEliminationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/noop_elimination_test.py90class
18201818ReorderDataDiscardingOpsTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/reorder_data_discarding_ops_test.py29class
18211819ShuffleAndRepeatFusionTesttensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/shuffle_and_repeat_fusion_test.py30class
18221820AssertCardinalityDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/assert_cardinality_dataset_serialization_test.py30class
18231821AutoShardDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/auto_shard_dataset_serialization_test.py36class
18241822BatchDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/batch_dataset_serialization_test.py33class
18251823CacheDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/cache_dataset_serialization_test.py32class
18261824_test_combinationstensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/checkpoint_input_pipeline_hook_test.py40function
18271825CheckpointInputPipelineHookTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/checkpoint_input_pipeline_hook_test.py44class
18281826ChooseFastestBranchDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/choose_fastest_branch_dataset_serialization_test.py33class
18291827ChooseFastestDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/choose_fastest_dataset_serialization_test.py30class
18301828ConcatenateDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/concatenate_dataset_serialization_test.py30class
18311829CsvDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/csv_dataset_serialization_test.py32class
18321830FromTensorsSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/dataset_constructor_serialization_test.py31class
18331831FromTensorSlicesSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/dataset_constructor_serialization_test.py49class
18341832FromSparseTensorSlicesSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/dataset_constructor_serialization_test.py71class
18351833remove_variantstensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/dataset_serialization_test_base.py41functionRemove variants from a nest structure, so sess.run will execute.
18361834DatasetSerializationTestBasetensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/dataset_serialization_test_base.py55classBase class for testing serializable datasets.
18371835FilterDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/filter_dataset_serialization_test.py31class
18381836FixedLengthRecordDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/fixed_length_record_dataset_serialization_test.py30class
18391837FlatMapDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/flat_map_dataset_serialization_test.py38class
18401838GroupByReducerSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/group_by_reducer_serialization_test.py31class
18411839GroupByWindowSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/group_by_window_serialization_test.py31class
18421840IgnoreErrorsSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/ignore_errors_serialization_test.py31class
18431841InterleaveDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/interleave_dataset_serialization_test.py32class
18441842MapAndBatchDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/map_and_batch_dataset_serialization_test.py34class
18451843MapDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/map_dataset_serialization_test.py38class
18461844MatchingFilesDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/matching_files_dataset_serialization_test.py33class
18471845OptimizeDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/optimize_dataset_serialization_test.py30class
18481846PaddedBatchDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/padded_batch_dataset_serialization_test.py32class
18491847ParallelInterleaveDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/parallel_interleave_dataset_serialization_test.py33class
18501848ParallelMapDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/parallel_map_dataset_serialization_test.py37class
18511849ParseExampleDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/parse_example_dataset_serialization_test.py29class
18521850PrefetchDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/prefetch_dataset_serialization_test.py29class
18531851RangeDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/range_dataset_serialization_test.py38class
18541852LegacyRebatchDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/rebatch_dataset_serialization_test.py30class
18551853RebatchDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/rebatch_dataset_serialization_test.py46class
18561854SampleFromDatasetsSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/sample_from_datasets_serialization_test.py30class
18571855ScanDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/scan_dataset_serialization_test.py30class
18581856SkipDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/sequence_dataset_serialization_test.py30class
18591857TakeDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/sequence_dataset_serialization_test.py61class
18601858RepeatDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/sequence_dataset_serialization_test.py91class
18611859SerializationIntegrationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/serialization_integration_test.py33class
18621860ShardDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/shard_dataset_serialization_test.py29class
18631861ShuffleAndRepeatSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/shuffle_and_repeat_dataset_serialization_test.py30class
18641862ShuffleDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/shuffle_dataset_serialization_test.py32class
18651863SnapshotDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/snapshot_dataset_serialization_test.py33class
18661864LegacySnapshotDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/snapshot_dataset_serialization_test.py124class
18671865SqlDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/sql_dataset_serialization_test.py34class
18681866StatsDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/stats_dataset_serialization_test.py36class
18691867TakeWhileDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/take_while_dataset_serialization_test.py30class
18701868TextLineDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/textline_dataset_serialization_test.py30class
18711869TFRecordDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/tf_record_dataset_serialization_test.py34class
18721870UnbatchDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/unbatch_dataset_serialization_test.py30class
18731871UniqueDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/unique_dataset_serialization_test.py30class
18741872ZipDatasetSerializationTesttensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/zip_dataset_serialization_test.py30class
18751873dense_to_ragged_batchtensorflow/tensorflow/python/data/experimental/ops/batching.py36functionA transformation that batches ragged elements into `tf.RaggedTensor`s. This transformation combines multiple consecutive elements of the input dataset into a single element. Like `tf.data.Dataset.batch`, the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. Unlike `tf.data.Dataset.batch`, the input elements to be batched may have different shapes: * If an input element is a `tf.Tensor` whose static `tf.TensorShape` is fully defined, then it is batched as normal. * If an input element is a `tf.Tensor` whose static `tf.TensorShape` contains one or more axes with unknown size (i.e., `shape[i]=None`), then the output will contain a `tf.RaggedTensor` that is ragged up to any of such dimensions. * If an input element is a `tf.RaggedTensor` or any other type, then it is batched as normal. Example: >>> dataset = tf.data.Dataset.from_tensor_slices(np.arange(6)) >>> dataset = dataset.map(lambda x: tf.range(x)) >>> dataset.element_spec.shape TensorShape([None]) >>> dataset = dataset.apply( ... tf.data.experimental.dense_to_ragged_batch(batch_size=2)) >>> for batch in dataset: ... print(batch) <tf.RaggedTensor [[], [0]]> <tf.RaggedTensor [[0, 1], [0, 1, 2]]> <tf.RaggedTensor [[0, 1, 2, 3], [0, 1, 2, 3, 4]]> Args: batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of consecutive elements of this dataset to combine in a single batch. drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. row_splits_dtype: The dtype that should be used for the `row_splits` of any new ragged tensors. Existing `tf.RaggedTensor` elements do not have their row_splits dtype changed. Returns: Dataset: A `Dataset`.
18761874dense_to_sparse_batchtensorflow/tensorflow/python/data/experimental/ops/batching.py102functionA transformation that batches ragged elements into `tf.sparse.SparseTensor`s. Like `Dataset.padded_batch()`, this transformation combines multiple consecutive elements of the dataset, which might have different shapes, into a single element. The resulting element has three components (`indices`, `values`, and `dense_shape`), which comprise a `tf.sparse.SparseTensor` that represents the same data. The `row_shape` represents the dense shape of each row in the resulting `tf.sparse.SparseTensor`, to which the effective batch size is prepended. For example: ```python # NOTE: The following examples use `{ ... }` to represent the # contents of a dataset. a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] } a.apply(tf.data.experimental.dense_to_sparse_batch( batch_size=2, row_shape=[6])) == { ([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]], # indices ['a', 'b', 'c', 'a', 'b'], # values [2, 6]), # dense_shape ([[0, 0], [0, 1], [0, 2], [0, 3]], ['a', 'b', 'c', 'd'], [1, 6]) } ``` Args: batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of consecutive elements of this dataset to combine in a single batch. row_shape: A `tf.TensorShape` or `tf.int64` vector tensor-like object representing the equivalent dense shape of a row in the resulting `tf.sparse.SparseTensor`. Each element of this dataset must have the same rank as `row_shape`, and must have size less than or equal to `row_shape` in each dimension. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`.
18771875map_and_batch_with_legacy_functiontensorflow/tensorflow/python/data/experimental/ops/batching.py153functionFused implementation of `map` and `batch`. NOTE: This is an escape hatch for existing uses of `map_and_batch` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `map_and_batch` as this method will not be removed in V2. Args: map_func: A function mapping a nested structure of tensors to another nested structure of tensors. batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of consecutive elements of this dataset to combine in a single batch. num_parallel_batches: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the number of batches to create in parallel. On one hand, higher values can help mitigate the effect of stragglers. On the other hand, higher values can increase contention if CPU is scarce. drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing whether the last batch should be dropped in case its size is smaller than desired; the default behavior is not to drop the smaller batch. num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`, representing the number of elements to process in parallel. If not specified, `batch_size * num_parallel_batches` elements will be processed in parallel. If the value `tf.data.experimental.AUTOTUNE` is used, then the number of parallel calls is set dynamically based on available CPU. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`. Raises: ValueError: If both `num_parallel_batches` and `num_parallel_calls` are specified.
18781876map_and_batchtensorflow/tensorflow/python/data/experimental/ops/batching.py213functionFused implementation of `map` and `batch`. Maps `map_func` across `batch_size` consecutive elements of this dataset and then combines them into a batch. Functionally, it is equivalent to `map` followed by `batch`. This API is temporary and deprecated since input pipeline optimization now fuses consecutive `map` and `batch` operations automatically. Args: map_func: A function mapping a nested structure of tensors to another nested structure of tensors. batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of consecutive elements of this dataset to combine in a single batch. num_parallel_batches: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the number of batches to create in parallel. On one hand, higher values can help mitigate the effect of stragglers. On the other hand, higher values can increase contention if CPU is scarce. drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing whether the last batch should be dropped in case its size is smaller than desired; the default behavior is not to drop the smaller batch. num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`, representing the number of elements to process in parallel. If not specified, `batch_size * num_parallel_batches` elements will be processed in parallel. If the value `tf.data.experimental.AUTOTUNE` is used, then the number of parallel calls is set dynamically based on available CPU. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`. Raises: ValueError: If both `num_parallel_batches` and `num_parallel_calls` are specified.
18791877unbatchtensorflow/tensorflow/python/data/experimental/ops/batching.py269functionSplits elements of a dataset into multiple elements on the batch dimension. For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`. ```python # NOTE: The following example uses `{ ... }` to represent the contents # of a dataset. a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] } a.unbatch() == { 'a', 'b', 'c', 'a', 'b', 'a', 'b', 'c', 'd'} ``` Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`.
18801878_DenseToSparseBatchDatasettensorflow/tensorflow/python/data/experimental/ops/batching.py297classA `Dataset` that batches ragged dense elements into `tf.sparse.SparseTensor`s.
18811879_MapAndBatchDatasettensorflow/tensorflow/python/data/experimental/ops/batching.py327classA `Dataset` that maps a function over a batch of elements.
18821880_DenseToRaggedDatasettensorflow/tensorflow/python/data/experimental/ops/batching.py380classA `Dataset` that encodes dense inputs as ragged (w/ ragged_rank=0). In particular: * Any tf.Tensor elements with rank>0 are encoded as ragged tensors with ragged_rank=0. This allows tensors with varying shape to be batched together. * Any other elements are left as-is.
18831881cardinalitytensorflow/tensorflow/python/data/experimental/ops/cardinality.py38functionReturns the cardinality of `dataset`, if known. The operation returns the cardinality of `dataset`. The operation may return `tf.data.experimental.INFINITE_CARDINALITY` if `dataset` contains an infinite number of elements or `tf.data.experimental.UNKNOWN_CARDINALITY` if the analysis fails to determine the number of elements in `dataset` (e.g. when the dataset source is a file). >>> dataset = tf.data.Dataset.range(42) >>> print(tf.data.experimental.cardinality(dataset).numpy()) 42 >>> dataset = dataset.repeat() >>> cardinality = tf.data.experimental.cardinality(dataset) >>> print((cardinality == tf.data.experimental.INFINITE_CARDINALITY).numpy()) True >>> dataset = dataset.filter(lambda x: True) >>> cardinality = tf.data.experimental.cardinality(dataset) >>> print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy()) True Args: dataset: A `tf.data.Dataset` for which to determine cardinality. Returns: A scalar `tf.int64` `Tensor` representing the cardinality of `dataset`. If the cardinality is infinite or unknown, the operation returns the named constant `INFINITE_CARDINALITY` and `UNKNOWN_CARDINALITY` respectively.
18841882assert_cardinalitytensorflow/tensorflow/python/data/experimental/ops/cardinality.py72functionAsserts the cardinality of the input dataset. NOTE: The following assumes that "examples.tfrecord" contains 42 records. >>> dataset = tf.data.TFRecordDataset("examples.tfrecord") >>> cardinality = tf.data.experimental.cardinality(dataset) >>> print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy()) True >>> dataset = dataset.apply(tf.data.experimental.assert_cardinality(42)) >>> print(tf.data.experimental.cardinality(dataset).numpy()) 42 Args: expected_cardinality: The expected cardinality of the input dataset. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`. Raises: FailedPreconditionError: The assertion is checked at runtime (when iterating the dataset) and an error is raised if the actual and expected cardinality differ.
18851883_AssertCardinalityDatasettensorflow/tensorflow/python/data/experimental/ops/cardinality.py103classA `Dataset` that assert the cardinality of its input.
18861884compresstensorflow/tensorflow/python/data/experimental/ops/compression_ops.py24functionCompress a dataset element. Args: element: A nested structure of types supported by Tensorflow. Returns: A variant tensor representing the compressed element. This variant can be passed to `uncompress` to get back the original element.
18871885uncompresstensorflow/tensorflow/python/data/experimental/ops/compression_ops.py39functionUncompress a compressed dataset element. Args: element: A scalar variant tensor to uncompress. The element should have been created by calling `compress`. output_spec: A nested structure of `tf.TypeSpec` representing the type(s) of the uncompressed element. Returns: The uncompressed element.
18881886CounterV2tensorflow/tensorflow/python/data/experimental/ops/counter.py29functionCreates a `Dataset` that counts from `start` in steps of size `step`. For example: ```python Dataset.count() == [0, 1, 2, ...) Dataset.count(2) == [2, 3, ...) Dataset.count(2, 5) == [2, 7, 12, ...) Dataset.count(0, -1) == [0, -1, -2, ...) Dataset.count(10, -1) == [10, 9, ...) ``` Args: start: (Optional.) The starting value for the counter. Defaults to 0. step: (Optional.) The step size for the counter. Defaults to 1. dtype: (Optional.) The data type for counter elements. Defaults to `tf.int64`. Returns: A `Dataset` of scalar `dtype` elements.
18891887CounterV1tensorflow/tensorflow/python/data/experimental/ops/counter.py59function
18901888ProcessingModetensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py36class
18911889_DataServiceDatasetV2tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py49classA `Dataset` that reads elements from the tf.data service.
18921890_DataServiceDatasetV1tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py124classA `Dataset` that executes its input through the tf.data service.
18931891_parse_servicetensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py148functionParses a tf.data service string into a (protocol, address) tuple. Args: service: A string in the format "protocol://address". Returns: The parsed (protocol, address) tuple
18941892_from_dataset_idtensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py175functionCreates a dataset which reads data from the tf.data service. This transformation is similar to `from_dataset_id`, but supports additional parameters which we do not yet want to add to the public Python API. Args: processing_mode: A string specifying the policy for how data should be processed by tf.data workers. Currently, the only supported value is "parallel_epochs". service: A string indicating how to connect to the tf.data service. The string should be in the format "<protocol>://<address>", e.g. "grpc://localhost:5000". dataset_id: The id of the dataset to read from. This id is returned by `register_dataset` when the dataset is registered with the tf.data service. element_spec: A nested structure of `tf.TypeSpec`s representing the type of elements produced by the dataset. Use `tf.data.Dataset.element_spec` to see the element spec for a given dataset. job_name: (Optional.) The name of the job. This argument makes it possible for multiple datasets to share the same job. The default behavior is that the dataset creates anonymous, exclusively owned jobs. max_outstanding_requests: (Optional.) A limit on how many elements may be requested at the same time. You can use this option to control the amount of memory used, since `distribute` won't use more than `element_size` * `max_outstanding_requests` of memory. task_refresh_interval_hint_ms: (Optional.) A hint for how often to query the dispatcher for task changes. Returns: A `tf.data.Dataset` which reads from the tf.data service.
18951893_distributetensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py248functionA transformation that moves dataset processing to the tf.data service. This transformation is similar to `distribute`, but supports additional parameters which we do not yet want to add to the public Python API. Args: processing_mode: A string specifying the policy for how data should be processed by tf.data workers. Currently, the only supported value is "parallel_epochs". service: A string indicating how to connect to the tf.data service. The string should be in the format "<protocol>://<address>", e.g. "grpc://localhost:5000". job_name: (Optional.) The name of the job. This argument makes it possible for multiple datasets to share the same job. The default behavior is that the dataset creates anonymous, exclusively owned jobs. max_outstanding_requests: (Optional.) A limit on how many elements may be requested at the same time. You can use this option to control the amount of memory used, since `distribute` won't use more than `element_size` * `max_outstanding_requests` of memory. task_refresh_interval_hint_ms: (Optional.) A hint for how often to query the dispatcher for task changes. Returns: Dataset: A `Dataset` of the elements produced by the data service.
18961894distributetensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py295functionA transformation that moves dataset processing to the tf.data service. When you iterate over a dataset containing the `distribute` transformation, the tf.data service creates a "job" which produces data for the dataset iteration. The `processing_mode` argument controls what data is produced by a tf.data service job. Currently, the only supported mode is "parallel_epochs". processing_mode="parallel_epochs" means that multiple tf.data workers will iterate through the dataset in parallel, each producing all elements of the dataset. For example, if the dataset contains {0, 1, 2}, every tf.data worker used for execution will produce {0, 1, 2}. If there are 3 workers, the job will produce the elements {0, 0, 0, 1, 1, 1, 2, 2, 2} (though not necessarily in that order). To account for this, it is recommended to randomly shuffle your dataset, so that different tf.data workers will iterate through the dataset in different orders. In the future, there will be additional processing modes. For example, a "one_epoch" mode which partitions the dataset across the tf.data workers, so that the consumers see each element of the dataset only once. ``` dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x*x) dataset = dataset.apply( tf.data.experimental.service.distribute("parallel_epochs", "grpc://dataservice:5000")) dataset = dataset.map(lambda x: x+1) for element in dataset: print(element) # prints { 1, 2, 5, 10, 17 } ``` In the above example, the first two lines (before the call to `distribute`) will be executed on tf.data workers, and the elements provided over RPC. The remaining transformations (after the call to `distribute`) will be executed locally. The `job_name` argument allows jobs to be shared across multiple datasets. Instead of each dataset creating its own job, all datasets with the same `job_name` will consume from the same job. A new job will be created for each iteration of the dataset (with each repetition of `Dataset.repeat` counting as a new iteration). Suppose two training workers (in either a single client or multi-client setup) iterate over the below dataset, and there is a single tf.data worker: ``` range5_dataset = tf.data.Dataset.range(5) dataset = range5_dataset.apply(tf.data.experimental.service.distribute( "parallel_epochs", "grpc://dataservice:5000", job_name="my_job_name")) for iteration in range(3): print(list(dataset)) ``` The elements of each job will be split between the two processes, with elements being consumed by the processes on a first-come first-served basis. One possible result is that process 1 prints ``` [0, 2, 4] [0, 1, 3] [1] ``` and process 2 prints ``` [1, 3] [2, 4] [0, 2, 3, 4] ``` Job names must not be re-used across different training jobs within the lifetime of the tf.data service. In general, the tf.data service is expected to live for the duration of a single training job. To use the tf.data service with multiple training jobs, make sure to use different job names to avoid conflicts. For example, suppose a training job calls `distribute` with `job_name="job"` and reads until end of input. If another independent job connects to the same tf.data service and tries to read from `job_name="job"`, it will immediately receive end of input, without getting any data. **Keras and Distribution Strategies** The dataset produced by the `distribute` transformation can be passed to Keras' `Model.fit` or Distribution Strategy's `tf.distribute.Strategy.experimental_distribute_dataset` like any other `tf.data.Dataset`. We recommend setting a `job_name` on the call to `distribute` so that if there are multiple workers, they read data from the same job. Note that the autosharding normally performed by `experimental_distribute_dataset` will be disabled when setting a `job_name`, since sharing the job already results in splitting data across the workers. When using a shared job, data will be dynamically balanced across workers, so that they reach end of input about the same time. This results in better worker utilization than with autosharding, where each worker processes an independent set of files, and some workers may run out of data earlier than others. Args: processing_mode: A string specifying the policy for how data should be processed by tf.data workers. Currently, the only supported value is "parallel_epochs". service: A string indicating how to connect to the tf.data service. The string should be in the format "protocol://address", e.g. "grpc://localhost:5000". job_name: (Optional.) The name of the job. This argument makes it possible for multiple datasets to share the same job. The default behavior is that the dataset creates anonymous, exclusively owned jobs. max_outstanding_requests: (Optional.) A limit on how many elements may be requested at the same time. You can use this option to control the amount of memory used, since `distribute` won't use more than `element_size` * `max_outstanding_requests` of memory. Returns: Dataset: A `Dataset` of the elements produced by the data service.
18971895register_datasettensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py424functionRegisters a dataset with the tf.data service. `register_dataset` registers a dataset with the tf.data service so that datasets can be created later with `tf.data.experimental.service.from_dataset_id`. This is useful when the dataset is registered by one process, then used in another process. When the same process is both registering and reading from the dataset, it is simpler to use `tf.data.experimental.service.distribute` instead. If the dataset is already registered with the tf.data service, `register_dataset` returns the already-registered dataset's id. >>> dispatcher = tf.data.experimental.service.DispatchServer(port=0) >>> dispatcher_address = dispatcher.target.split("://")[1] >>> worker = tf.data.experimental.service.WorkerServer( ... port=0, dispatcher_address=dispatcher_address) >>> dataset = tf.data.Dataset.range(10) >>> dataset_id = tf.data.experimental.service.register_dataset( ... dispatcher.target, dataset) >>> dataset = tf.data.experimental.service.from_dataset_id( ... processing_mode="parallel_epochs", ... service=dispatcher.target, ... dataset_id=dataset_id, ... element_spec=dataset.element_spec) >>> print(list(dataset.as_numpy_iterator())) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] Args: service: A string indicating how to connect to the tf.data service. The string should be in the format "protocol://address", e.g. "grpc://localhost:5000". dataset: A `tf.data.Dataset` to register with the tf.data service. Returns: A scalar int64 tensor of the registered dataset's id.
18981896from_dataset_idtensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py491functionCreates a dataset which reads data from the tf.data service. This is useful when the dataset is registered by one process, then used in another process. When the same process is both registering and reading from the dataset, it is simpler to use `tf.data.experimental.service.distribute` instead. Before using `from_dataset_id`, the dataset must have been registered with the tf.data service using `tf.data.experimental.service.register_dataset`. `register_dataset` returns a dataset id for the registered dataset. That is the `dataset_id` which should be passed to `from_dataset_id`. The `element_spec` argument indicates the `tf.TypeSpec`s for the elements produced by the dataset. Currently `element_spec` must be explicitly specified, and match the dataset registered under `dataset_id`. `element_spec` defaults to `None` so that in the future we can support automatically discovering the `element_spec` by querying the tf.data service. `tf.data.experimental.service.distribute` is a convenience method which combines `register_dataset` and `from_dataset_id` into a dataset transformation. See the documentation for `tf.data.experimental.service.distribute` for more detail about how `from_dataset_id` works. >>> dispatcher = tf.data.experimental.service.DispatchServer(port=0) >>> dispatcher_address = dispatcher.target.split("://")[1] >>> worker = tf.data.experimental.service.WorkerServer( ... port=0, dispatcher_address=dispatcher_address) >>> dataset = tf.data.Dataset.range(10) >>> dataset_id = tf.data.experimental.service.register_dataset( ... dispatcher.target, dataset) >>> dataset = tf.data.experimental.service.from_dataset_id( ... processing_mode="parallel_epochs", ... service=dispatcher.target, ... dataset_id=dataset_id, ... element_spec=dataset.element_spec) >>> print(list(dataset.as_numpy_iterator())) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] Args: processing_mode: A string specifying the policy for how data should be processed by tf.data workers. Currently, the only supported value is "parallel_epochs". service: A string indicating how to connect to the tf.data service. The string should be in the format "protocol://address", e.g. "grpc://localhost:5000". dataset_id: The id of the dataset to read from. This id is returned by `register_dataset` when the dataset is registered with the tf.data service. element_spec: A nested structure of `tf.TypeSpec`s representing the type of elements produced by the dataset. Use `tf.data.Dataset.element_spec` to see the element spec for a given dataset. job_name: (Optional.) The name of the job. This argument makes it possible for multiple datasets to share the same job. The default behavior is that the dataset creates anonymous, exclusively owned jobs. max_outstanding_requests: (Optional.) A limit on how many elements may be requested at the same time. You can use this option to control the amount of memory used, since `distribute` won't use more than `element_size` * `max_outstanding_requests` of memory. Returns: A `tf.data.Dataset` which reads from the tf.data service.
18991897_AutoShardDatasettensorflow/tensorflow/python/data/experimental/ops/distribute.py34classA `Dataset` that shards the `Dataset` automatically. This dataset takes in an existing dataset and tries to automatically figure out how to shard the dataset in a multi-worker scenario. Currently, it uses Grappler to walk up the dataset graph until it finds a reader dataset (e.g. CSVDataset, TFRecordDataset), then inserts a ShardDataset op before that node so that each worker only sees some files. Args: num_workers: Total number of workers to shard this dataset across. index: The current worker index (out of the total number of workers) this dataset is for. Raises: NotFoundError: If we cannot find a suitable reader dataset to begin automatically sharding the dataset.
19001898_AutoShardDatasetV1tensorflow/tensorflow/python/data/experimental/ops/distribute.py71function
19011899_RebatchDatasettensorflow/tensorflow/python/data/experimental/ops/distribute.py76classA `Dataset` that rebatches elements from its input into new batch sizes. `_RebatchDataset(input_dataset, batch_sizes)` is functionally equivalent to `input_dataset.unbatch().batch(N)`, where the value of N cycles through the `batch_sizes` input list. The elements produced by this dataset have the same rank as the elements of the input dataset. For example: ```python ds = tf.data.Dataset.range(8) ds = ds.batch(4) ds = _RebatchDataset(ds, batch_sizes=[2, 1, 1]) for elem in ds: print(elem) >> [0, 1], [2], [3], [4, 5], [6], [7] ds = tf.data.Dataset.range(16) ds = ds.batch(4) ds = _RebatchDataset(ds, batch_sizes=[6]) for elem in ds: print(elem) >> [0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11], [12, 13, 14, 15] ```
19021900_LegacyRebatchDatasettensorflow/tensorflow/python/data/experimental/ops/distribute.py209classA `Dataset` that divides its input batches into `num_replicas` sub-batches. For each batch in the input dataset, _LegacyRebatchDataset will produce `num_replicas` smaller batches whose sizes add up to the original batch size. For example: ```python ds = tf.data.Dataset.range(8) ds = ds.batch(4) ds = _LegacyRebatchDataset(ds, num_replicas=3) for elem in ds: print(elem) >> [0, 1], [2, 3], [], [4, 5], [6, 7], [] ```
19031901_RemoteDatasettensorflow/tensorflow/python/data/experimental/ops/distribute.py280classCreates a dataset on a given `device` given a graph def.
19041902replicatetensorflow/tensorflow/python/data/experimental/ops/distribute.py294functionA transformation that replicates `dataset` onto a list of devices. Args: dataset: A `tf.data.Dataset` object. devices: A list of devices to replicate the dataset on. Returns: A dictionary mapping device name to a dataset on that device.
19051903batch_sizes_for_workertensorflow/tensorflow/python/data/experimental/ops/distribute.py328functionDetermines how to rebatch a dataset for the given worker. Given the global batch size, number of workers, number of replicas per worker, and worker index, returns the correct batch sizes for rebatching a dataset on worker `worker_index` of `num_workers`, such that each global step (across all workers and replicas) will consume global_batch_size elements. The returned value should be passed as the `batch_sizes` input parameter to `tf.data.experimental.rebatch()`. The returned batch sizes meet the following constraints: Let G = global_batch_size, W = num_workers, R = num_replicas_per_worker (A) for any worker, len(batch_sizes) = W * R (B) for any worker, sum(batch_sizes) == G (C) for any global step (i.e. R iterations on each worker), the sum of batches consumed by replicas across all workers is G. (D) any two batch sizes of any two replicas differs by at most one. For example, suppose we have G = 7, W = 2, R = 2, and suppose we have two files which each contain 7 elements: ```python # WORKER 0 batch_sizes_0 = batch_sizes_for_worker(global_batch_size=global_batch_size, num_workers=2, num_replicas_per_worker=2, worker_index=0) print(batch_sizes_0) >> [2, 2, 2, 1] dataset_0 = tf.data.Dataset.from_tensor_slices(["file_a", "file_b"]) dataset_0 = dataset_0.shard(num_shards, index=0) dataset_0 = dataset_0.batch(7) dataset_0 = dataset_0.apply(tf.data.experimental.rebatch(batch_sizes_0)) for elem in dataset_0: print(elem) >> [[A0, A1], [A2, A3], [A4, A5], [A6]] # WORKER 1 batch_sizes_1 = batch_sizes_for_worker(global_batch_size=global_batch_size, num_workers=2, num_replicas_per_worker=2, worker_index=1) print(batch_sizes_1) >> [2, 1, 2, 2] dataset_1 = tf.data.Dataset.from_tensor_slices(["file_a", "file_b"]) dataset_1 = dataset_1.shard(num_shards, index=1) dataset_1 = dataset_1.batch(7) dataset_1 = dataset_1.apply(tf.data.experimental.rebatch(batch_sizes_1)) for elem in dataset_1: print(elem) >> [[B0, B1], [B2], [B3, B4], [B5, B6]] ``` The above example will produce the following elements: Step 1: Worker 0 Replica 0: [A0, A1] Worker 0 Replica 1: [A2, A3] Worker 1 Replica 0: [B0, B1] Worker 1 Replica 1: [B2] Total batch size = 7 Step 2: Worker 0 Replica 0: [A4, A5] Worker 0 Replica 1: [A6] Worker 1 Replica 0: [B3, B4] Worker 1 Replica 1: [B5, B6] Total batch size = 7 Args: global_batch_size: A `tf.int64` scalar, representing the global batch size. num_workers: An integer representing the number of workers the dataset will be distributed across. num_replicas_per_worker: An integer representing the number of replicas per worker. All workers are assumed to have the same number of replicas. worker_index: An integer index of the worker to be rebatched. Returns: A `tf.int64` vector, representing the batch sizes to rebatch the dataset into.
19061904compute_batch_sizetensorflow/tensorflow/python/data/experimental/ops/distribute.py436functionAn operation that returns the batch size of the dataset. This op tries to infer the batch size statically by walking up the dataset tree from the final dataset node and returning the batch size of the first batching dataset (such as from .batch() and .padded_batch()) that it encounters. This differs from using the `element_spec` of a dataset in that it does not account for partial batches. This operation may fail if it encounters contradictory batch sizes (for example, if the dataset is created by zipping together two datasets with different batch sizes), if there are no explicit batching transformations, or if there are operations downstream from the batching transformation that may modify its batch size. In these cases, it returns a -1. Args: dataset: A `tf.data.Dataset` object. Returns: A `tf.int64` Tensor representing the batch size of the dataset sans partial batches. If this cannot be inferred statically, the value of this tensor will be -1.
19071905AutoShardPolicytensorflow/tensorflow/python/data/experimental/ops/distribute_options.py27classRepresents the type of auto-sharding we enable. Please see the DistributeOptions.auto_shard_policy documentation for more information on each type of autosharding.
19081906ExternalStatePolicytensorflow/tensorflow/python/data/experimental/ops/distribute_options.py39class
19091907DistributeOptionstensorflow/tensorflow/python/data/experimental/ops/distribute_options.py46classRepresents options for distributed data processing. You can set the distribution options of a dataset through the `experimental_distribute` property of `tf.data.Options`; the property is an instance of `tf.data.experimental.DistributeOptions`. ```python options = tf.data.Options() options.experimental_distribute.auto_shard_policy = AutoShardPolicy.OFF dataset = dataset.with_options(options) ```
19101908enumerate_datasettensorflow/tensorflow/python/data/experimental/ops/enumerate_ops.py26functionA transformation that enumerates the elements of a dataset. It is similar to python's `enumerate`. For example: ```python # NOTE: The following examples use `{ ... }` to represent the # contents of a dataset. a = { 1, 2, 3 } b = { (7, 8), (9, 10) } # The nested structure of the `datasets` argument determines the # structure of elements in the resulting dataset. a.apply(tf.data.experimental.enumerate_dataset(start=5)) => { (5, 1), (6, 2), (7, 3) } b.apply(tf.data.experimental.enumerate_dataset()) => { (0, (7, 8)), (1, (9, 10)) } ``` Args: start: A `tf.int64` scalar `tf.Tensor`, representing the start value for enumeration. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`.
19111909ignore_errorstensorflow/tensorflow/python/data/experimental/ops/error_ops.py26functionCreates a `Dataset` from another `Dataset` and silently ignores any errors. Use this transformation to produce a dataset that contains the same elements as the input, but silently drops any elements that caused an error. For example: ```python dataset = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.]) # Computing `tf.debugging.check_numerics(1. / 0.)` will raise an InvalidArgumentError. dataset = dataset.map(lambda x: tf.debugging.check_numerics(1. / x, "error")) # Using `ignore_errors()` will drop the element that causes an error. dataset = dataset.apply(tf.data.experimental.ignore_errors()) # ==> {1., 0.5, 0.2} ``` Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`.
19121910_IgnoreErrorsDatasettensorflow/tensorflow/python/data/experimental/ops/error_ops.py56classA `Dataset` that silently ignores errors when computing its input.
19131911get_single_elementtensorflow/tensorflow/python/data/experimental/ops/get_single_element.py27functionReturns the single element in `dataset` as a nested structure of tensors. This function enables you to use a `tf.data.Dataset` in a stateless "tensor-in tensor-out" expression, without creating an iterator. This can be useful when your preprocessing transformations are expressed as a `Dataset`, and you want to use the transformation at serving time. For example: ```python def preprocessing_fn(input_str): # ... return image, label input_batch = ... # input batch of BATCH_SIZE elements dataset = (tf.data.Dataset.from_tensor_slices(input_batch) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) image_batch, label_batch = tf.data.experimental.get_single_element(dataset) ``` Args: dataset: A `tf.data.Dataset` object containing a single element. Returns: A nested structure of `tf.Tensor` objects, corresponding to the single element of `dataset`. Raises: TypeError: if `dataset` is not a `tf.data.Dataset` object. InvalidArgumentError (at runtime): if `dataset` does not contain exactly one element.
19141912group_by_reducertensorflow/tensorflow/python/data/experimental/ops/grouping.py38functionA transformation that groups elements and performs a reduction. This transformation maps element of a dataset to a key using `key_func` and groups the elements by key. The `reducer` is used to process each group; its `init_func` is used to initialize state for each group when it is created, the `reduce_func` is used to update the state every time an element is mapped to the matching group, and the `finalize_func` is used to map the final state to an output value. Args: key_func: A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar `tf.int64` tensor. reducer: An instance of `Reducer`, which captures the reduction logic using the `init_func`, `reduce_func`, and `finalize_func` functions. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`.
19151913group_by_windowtensorflow/tensorflow/python/data/experimental/ops/grouping.py68functionA transformation that groups windows of elements by key and reduces them. This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller. You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`. Args: key_func: A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar `tf.int64` tensor. reduce_func: A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. window_size: A `tf.int64` scalar `tf.Tensor`, representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. window_size_func: A function mapping a key to a `tf.int64` scalar `tf.Tensor`, representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`. Raises: ValueError: if neither or both of {`window_size`, `window_size_func`} are passed.
19161914bucket_by_sequence_lengthtensorflow/tensorflow/python/data/experimental/ops/grouping.py128functionA transformation that buckets elements in a `Dataset` by length. Elements of the `Dataset` are grouped together by length and then are padded and batched. This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch which increases training step efficiency. Args: element_length_func: function from element in `Dataset` to `tf.int32`, determines the length of the element, which will determine the bucket it goes into. bucket_boundaries: `list<int>`, upper length boundaries of the buckets. bucket_batch_sizes: `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. padded_shapes: Nested structure of `tf.TensorShape` to pass to `tf.data.Dataset.padded_batch`. If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. padding_values: Values to pad with, passed to `tf.data.Dataset.padded_batch`. Defaults to padding with 0. pad_to_bucket_boundary: bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. no_padding: `bool`, indicates whether to pad the batch features (features need to be either of type `tf.sparse.SparseTensor` or of same shape). drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`. Raises: ValueError: if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`.
19171915_GroupByReducerDatasettensorflow/tensorflow/python/data/experimental/ops/grouping.py247classA `Dataset` that groups its input and performs a reduction.
19181916_GroupByWindowDatasettensorflow/tensorflow/python/data/experimental/ops/grouping.py370classA `Dataset` that groups its input and performs a windowed reduction.
19191917Reducertensorflow/tensorflow/python/data/experimental/ops/grouping.py443classA reducer is used for reducing a set of elements. A reducer is represented as a tuple of the three functions: 1) initialization function: key => initial state 2) reduce function: (old state, input) => new state 3) finalization function: state => result
19201918parallel_interleavetensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py43functionA parallel version of the `Dataset.interleave()` transformation. `parallel_interleave()` maps `map_func` across its input to produce nested datasets, and outputs their elements interleaved. Unlike `tf.data.Dataset.interleave`, it gets elements from `cycle_length` nested datasets in parallel, which increases the throughput, especially in the presence of stragglers. Furthermore, the `sloppy` argument can be used to improve performance, by relaxing the requirement that the outputs are produced in a deterministic order, and allowing the implementation to skip over nested datasets whose elements are not readily available when requested. Example usage: ```python # Preprocess 4 files concurrently. filenames = tf.data.Dataset.list_files("/path/to/data/train*.tfrecords") dataset = filenames.apply( tf.data.experimental.parallel_interleave( lambda filename: tf.data.TFRecordDataset(filename), cycle_length=4)) ``` WARNING: If `sloppy` is `True`, the order of produced elements is not deterministic. Args: map_func: A function mapping a nested structure of tensors to a `Dataset`. cycle_length: The number of input `Dataset`s to interleave from in parallel. block_length: The number of consecutive elements to pull from an input `Dataset` before advancing to the next input `Dataset`. sloppy: A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If `sloppy` is `None`, the `tf.data.Options.experimental_deterministic` dataset option (`True` by default) is used to decide whether to enforce a deterministic order. buffer_output_elements: The number of elements each iterator being interleaved should buffer (similar to the `.prefetch()` transformation for each interleaved iterator). prefetch_input_elements: The number of input elements to transform to iterators before they are needed for interleaving. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`.
19211919_DirectedInterleaveDatasettensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py104classA substitute for `Dataset.interleave()` on a fixed list of datasets.
19221920sample_from_datasets_v2tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py146functionSamples elements at random from the datasets in `datasets`. Args: datasets: A list of `tf.data.Dataset` objects with compatible structure. weights: (Optional.) A list of `len(datasets)` floating-point values where `weights[i]` represents the probability with which an element should be sampled from `datasets[i]`, or a `tf.data.Dataset` object where each element is such a list. Defaults to a uniform distribution across `datasets`. seed: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the random seed that will be used to create the distribution. See `tf.random.set_seed` for behavior. Returns: A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. Raises: TypeError: If the `datasets` or `weights` arguments have the wrong type. ValueError: If the `weights` argument is specified and does not match the length of the `datasets` element.
19231921sample_from_datasets_v1tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py230function
19241922choose_from_datasets_v2tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py237functionCreates a dataset that deterministically chooses elements from `datasets`. For example, given the following datasets: ```python datasets = [tf.data.Dataset.from_tensors("foo").repeat(), tf.data.Dataset.from_tensors("bar").repeat(), tf.data.Dataset.from_tensors("baz").repeat()] # Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`. choice_dataset = tf.data.Dataset.range(3).repeat(3) result = tf.data.experimental.choose_from_datasets(datasets, choice_dataset) ``` The elements of `result` will be: ``` "foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz" ``` Args: datasets: A list of `tf.data.Dataset` objects with compatible structure. choice_dataset: A `tf.data.Dataset` of scalar `tf.int64` tensors between `0` and `len(datasets) - 1`. Returns: A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. Raises: TypeError: If the `datasets` or `choice_dataset` arguments have the wrong type.
19251923choose_from_datasets_v1tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py280function
19261924savetensorflow/tensorflow/python/data/experimental/ops/io.py34functionSaves the content of the given dataset. Example usage: >>> import tempfile >>> path = os.path.join(tempfile.gettempdir(), "saved_data") >>> # Save a dataset >>> dataset = tf.data.Dataset.range(2) >>> tf.data.experimental.save(dataset, path) >>> new_dataset = tf.data.experimental.load(path, ... tf.TensorSpec(shape=(), dtype=tf.int64)) >>> for elem in new_dataset: ... print(elem) tf.Tensor(0, shape=(), dtype=int64) tf.Tensor(1, shape=(), dtype=int64) The saved dataset is saved in multiple file "shards". By default, the dataset output is divided to shards in a round-robin fashion but custom sharding can be specified via the `shard_func` function. For example, you can save the dataset to using a single shard as follows: ```python dataset = make_dataset() def custom_shard_func(element): return 0 dataset = tf.data.experimental.save( path="/path/to/data", ..., shard_func=custom_shard_func) ``` NOTE: The directory layout and file format used for saving the dataset is considered an implementation detail and may change. For this reason, datasets saved through `tf.data.experimental.save` should only be consumed through `tf.data.experimental.load`, which is guaranteed to be backwards compatible. Args: dataset: The dataset to save. path: Required. A directory to use for saving the dataset. compression: Optional. The algorithm to use to compress data when writing it. Supported options are `GZIP` and `NONE`. Defaults to `NONE`. shard_func: Optional. A function to control the mapping of dataset elements to file shards. The function is expected to map elements of the input dataset to int64 shard IDs. If present, the function will be traced and executed as graph computation.
19271925_LoadDatasettensorflow/tensorflow/python/data/experimental/ops/io.py107classA dataset that loads previously saved dataset.
19281926loadtensorflow/tensorflow/python/data/experimental/ops/io.py146functionLoads a previously saved dataset. Example usage: >>> import tempfile >>> path = os.path.join(tempfile.gettempdir(), "saved_data") >>> # Save a dataset >>> dataset = tf.data.Dataset.range(2) >>> tf.data.experimental.save(dataset, path) >>> new_dataset = tf.data.experimental.load(path, ... tf.TensorSpec(shape=(), dtype=tf.int64)) >>> for elem in new_dataset: ... print(elem) tf.Tensor(0, shape=(), dtype=int64) tf.Tensor(1, shape=(), dtype=int64) Note that to load a previously saved dataset, you need to specify `element_spec` -- a type signature of the elements of the saved dataset, which can be obtained via `tf.data.Dataset.element_spec`. This requirement exists so that shape inference of the loaded dataset does not need to perform I/O. If the default option of sharding the saved dataset was used, the element order of the saved dataset will be preserved when loading it. The `reader_func` argument can be used to specify a custom order in which elements should be loaded from the individual shards. The `reader_func` is expected to take a single argument -- a dataset of datasets, each containing elements of one of the shards -- and return a dataset of elements. For example, the order of shards can be shuffled when loading them as follows: ```python def custom_reader_func(datasets): datasets = datasets.shuffle(NUM_SHARDS) return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE) dataset = tf.data.experimental.load( path="/path/to/data", ..., reader_func=custom_reader_func) ``` Args: path: Required. A path pointing to a previously saved dataset. element_spec: Required. A nested structure of `tf.TypeSpec` objects matching the structure of an element of the saved dataset and specifying the type of individual element components. compression: Optional. The algorithm to use to decompress the data when reading it. Supported options are `GZIP` and `NONE`. Defaults to `NONE`. reader_func: Optional. A function to control how to read data from shards. If present, the function will be traced and executed as graph computation. Returns: A `tf.data.Dataset` instance.
19291927_convert_external_state_policy_to_enumtensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py32function
19301928make_saveable_from_iteratortensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py49functionReturns a SaveableObject for saving/restoring iterator state using Saver. Args: iterator: Iterator. external_state_policy: A string that identifies how to handle input pipelines that depend on external state. Possible values are 'ignore': The external state is silently ignored. 'warn': The external state is ignored, logging a warning. 'fail': The operation fails upon encountering external state. By default we set it to 'fail'. Returns: A SaveableObject for saving/restoring iterator state using Saver. Raises: ValueError: If iterator does not support checkpointing. ValueError: If `external_state_policy` is not one of 'warn', 'ignore' or 'fail'. For example: ```python with tf.Graph().as_default(): ds = tf.data.Dataset.range(10) iterator = ds.make_initializable_iterator() # Build the iterator SaveableObject. saveable_obj = tf.data.experimental.make_saveable_from_iterator(iterator) # Add the SaveableObject to the SAVEABLE_OBJECTS collection so # it can be automatically saved using Saver. tf.compat.v1.add_to_collection(tf.GraphKeys.SAVEABLE_OBJECTS, saveable_obj) saver = tf.compat.v1.train.Saver() while continue_training: ... Perform training ... if should_save_checkpoint: saver.save() ``` Note: When restoring the iterator, the existing iterator state is completely discarded. This means that any changes you may have made to the Dataset graph will be discarded as well! This includes the new Dataset graph that you may have built during validation. So, while running validation, make sure to run the initializer for the validation input pipeline after restoring the checkpoint. Note: Not all iterators support checkpointing yet. Attempting to save the state of an unsupported iterator will throw an error.
19311929CheckpointInputPipelineHooktensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py106classCheckpoints input pipeline state every N steps or seconds. This hook saves the state of the iterators in the `Graph` so that when training is resumed the input pipeline continues from where it left off. This could potentially avoid overfitting in certain pipelines where the number of training steps per eval are small compared to the dataset size or if the training pipeline is pre-empted. Differences from `CheckpointSaverHook`: 1. Saves only the input pipelines in the "iterators" collection and not the global variables or other saveable objects. 2. Does not write the `GraphDef` and `MetaGraphDef` to the summary. Example of checkpointing the training pipeline: ```python est = tf.estimator.Estimator(model_fn) while True: est.train( train_input_fn, hooks=[tf.data.experimental.CheckpointInputPipelineHook(est)], steps=train_steps_per_eval) # Note: We do not pass the hook here. metrics = est.evaluate(eval_input_fn) if should_stop_the_training(metrics): break ``` This hook should be used if the input pipeline state needs to be saved separate from the model checkpoint. Doing so may be useful for a few reasons: 1. The input pipeline checkpoint may be large, if there are large shuffle or prefetch buffers for instance, and may bloat the checkpoint size. 2. If the input pipeline is shared between training and validation, restoring the checkpoint during validation may override the validation input pipeline. For saving the input pipeline checkpoint alongside the model weights use `tf.data.experimental.make_saveable_from_iterator` directly to create a `SaveableObject` and add to the `SAVEABLE_OBJECTS` collection. Note, however, that you will need to be careful not to restore the training iterator during eval. You can do that by not adding the iterator to the SAVEABLE_OBJECTS collector when building the eval graph.
19321930_CustomSavertensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py297class`Saver` with a different default `latest_filename`. This is used in the `CheckpointInputPipelineHook` to avoid conflicts with the model ckpt saved by the `CheckpointSaverHook`.
19331931map_defuntensorflow/tensorflow/python/data/experimental/ops/map_defun.py26functionMap a function on the list of tensors unpacked from `elems` on dimension 0. Args: fn: A function (`function.defun`) that takes a list of tensors and returns another list of tensors. The output list has the same types as output_dtypes. The elements of the output list have the same dimension 0 as `elems`, and the remaining dimensions correspond to those of `fn_output_shapes`. elems: A list of tensors. output_dtypes: A list of dtypes corresponding to the output types of the function. output_shapes: A list of `TensorShape`s corresponding to the output shapes from each invocation of the function on slices of inputs. max_intra_op_parallelism: An integer. If positive, sets the max parallelism limit of each function call to this. Raises: ValueError: if any of the inputs are malformed. Returns: A list of `Tensor` objects with the same types as `output_dtypes`.
19341932MatchingFilesDatasettensorflow/tensorflow/python/data/experimental/ops/matching_files.py28classA `Dataset` that list the files according to the input patterns.
19351933modeltensorflow/tensorflow/python/data/experimental/ops/optimization.py24functionA transformation that models performance. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`.
19361934optimizetensorflow/tensorflow/python/data/experimental/ops/optimization.py39functionA transformation that applies optimizations. Args: optimizations: (Optional.) A `tf.string` vector `tf.Tensor` identifying optimizations to use. If not specified, the default set of optimizations is applied. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`.
19371935_ChooseFastestDatasettensorflow/tensorflow/python/data/experimental/ops/optimization.py59classA `Dataset` that merges two input datasets.
19381936_ChooseFastestBranchDatasettensorflow/tensorflow/python/data/experimental/ops/optimization.py106classA `Dataset` that merges two input datasets.
19391937_AutotuneAlgorithmtensorflow/tensorflow/python/data/experimental/ops/optimization_options.py29classControls what algorithm is used in the autotune implementation.
19401938MapVectorizationOptionstensorflow/tensorflow/python/data/experimental/ops/optimization_options.py36classRepresents options for the MapVectorization optimization.
19411939OptimizationOptionstensorflow/tensorflow/python/data/experimental/ops/optimization_options.py70classRepresents options for dataset optimizations. You can set the optimization options of a dataset through the `experimental_optimization` property of `tf.data.Options`; the property is an instance of `tf.data.experimental.OptimizationOptions`. ```python options = tf.data.Options() options.experimental_optimization.noop_elimination = True options.experimental_optimization.map_vectorization.enabled = True options.experimental_optimization.apply_default_optimizations = False dataset = dataset.with_options(options) ```
19421940_ParseExampleDatasettensorflow/tensorflow/python/data/experimental/ops/parsing_ops.py31classA `Dataset` that parses `example` dataset into a `dict` dataset.
19431941parse_example_datasettensorflow/tensorflow/python/data/experimental/ops/parsing_ops.py110functionA transformation that parses `Example` protos into a `dict` of tensors. Parses a number of serialized `Example` protos given in `serialized`. We refer to `serialized` as a batch with `batch_size` many entries of individual `Example` protos. This op parses serialized examples into a dictionary mapping keys to `Tensor`, `SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to `VarLenFeature`, `RaggedFeature`, `SparseFeature`, and `FixedLenFeature` objects. Each `VarLenFeature` and `SparseFeature` is mapped to a `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each `FixedLenFeature` is mapped to a `Tensor`. See `tf.io.parse_example` for more details about feature dictionaries. Args: features: A `dict` mapping feature keys to `FixedLenFeature`, `VarLenFeature`, `RaggedFeature`, and `SparseFeature` values. num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`, representing the number of parsing processes to call in parallel. deterministic: (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order if some parsing calls complete faster than others. If `deterministic` is `None`, the `tf.data.Options.experimental_deterministic` dataset option (`True` by default) is used to decide whether to produce elements deterministically. Returns: A dataset transformation function, which can be passed to `tf.data.Dataset.apply`. Raises: ValueError: if features argument is None.
19441942prefetch_to_devicetensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py37functionA transformation that prefetches dataset values to the given `device`. NOTE: Although the transformation creates a `tf.data.Dataset`, the transformation must be the final `Dataset` in the input pipeline. Args: device: A string. The name of a device to which elements will be prefetched. buffer_size: (Optional.) The number of elements to buffer on `device`. Defaults to an automatically chosen value. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`.
19451943copy_to_devicetensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py60functionA transformation that copies dataset elements to the given `target_device`. Args: target_device: The name of a device to which elements will be copied. source_device: The original device on which `input_dataset` will be placed. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`.
19461944_CopyToDeviceDatasettensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py86classA `Dataset` that copies elements to another device.
19471945_MapOnGpuDatasettensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py228classA `Dataset` that maps a function over elements in its using a GPU.
19481946map_on_gputensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py260functionMaps `map_func` across the elements of this dataset. NOTE: This is a highly experimental version of `tf.data.Dataset.map` that runs `map_func` on GPU. It must be used after applying the `tf.data.experimental.copy_to_device` transformation with a GPU device argument. Args: map_func: A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to another nested structure of tensors. Returns: A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`.
19491947RandomDatasetV2tensorflow/tensorflow/python/data/experimental/ops/random_ops.py32classA `Dataset` of pseudorandom values.
19501948RandomDatasetV1tensorflow/tensorflow/python/data/experimental/ops/random_ops.py48classA `Dataset` of pseudorandom values.
19511949_is_valid_int32tensorflow/tensorflow/python/data/experimental/ops/readers.py50function
19521950_is_valid_int64tensorflow/tensorflow/python/data/experimental/ops/readers.py59function
19531951_is_valid_floattensorflow/tensorflow/python/data/experimental/ops/readers.py67function
19541952_infer_typetensorflow/tensorflow/python/data/experimental/ops/readers.py74functionGiven a string, infers its tensor type. Infers the type of a value by picking the least 'permissive' type possible, while still allowing the previous type inference for this column to be valid. Args: str_val: String value to infer the type of. na_value: Additional string to recognize as a NA/NaN CSV value. prev_type: Type previously inferred based on values of this column that we've seen up till now. Returns: Inferred dtype.
19551953_next_csv_rowtensorflow/tensorflow/python/data/experimental/ops/readers.py111functionGenerator that yields rows of CSV file(s) in order.
19561954_infer_column_defaultstensorflow/tensorflow/python/data/experimental/ops/readers.py131functionInfers column types from the first N valid CSV records of files.
19571955_infer_column_namestensorflow/tensorflow/python/data/experimental/ops/readers.py158functionInfers column names from first rows of files.
19581956_get_sorted_col_indicestensorflow/tensorflow/python/data/experimental/ops/readers.py183functionTransforms select_columns argument into sorted column indices.
19591957_maybe_shuffle_and_repeattensorflow/tensorflow/python/data/experimental/ops/readers.py213functionOptionally shuffle and repeat dataset, as requested.
19601958make_tf_record_datasettensorflow/tensorflow/python/data/experimental/ops/readers.py223functionReads and optionally parses TFRecord files into a dataset. Provides common functionality such as batching, optional parsing, shuffling, and performant defaults. Args: file_pattern: List of files or patterns of TFRecord file paths. See `tf.io.gfile.glob` for pattern rules. batch_size: An int representing the number of records to combine in a single batch. parser_fn: (Optional.) A function accepting string input to parse and process the record contents. This function must map records to components of a fixed shape, so they may be batched. By default, uses the record contents unmodified. num_epochs: (Optional.) An int specifying the number of times this dataset is repeated. If None (the default), cycles through the dataset forever. shuffle: (Optional.) A bool that indicates whether the input should be shuffled. Defaults to `True`. shuffle_buffer_size: (Optional.) Buffer size to use for shuffling. A large buffer size ensures better shuffling, but increases memory usage and startup time. shuffle_seed: (Optional.) Randomization seed to use for shuffling. prefetch_buffer_size: (Optional.) An int specifying the number of feature batches to prefetch for performance improvement. Defaults to auto-tune. Set to 0 to disable prefetching. num_parallel_reads: (Optional.) Number of threads used to read records from files. By default or if set to a value >1, the results will be interleaved. Defaults to `24`. num_parallel_parser_calls: (Optional.) Number of parallel records to parse in parallel. Defaults to `batch_size`. drop_final_batch: (Optional.) Whether the last batch should be dropped in case its size is smaller than `batch_size`; the default behavior is not to drop the smaller batch. Returns: A dataset, where each element matches the output of `parser_fn` except it will have an additional leading `batch-size` dimension, or a `batch_size`-length 1-D tensor of strings if `parser_fn` is unspecified.
The file is too large to be shown. View raw