1 | | name | file | line | type | comment |
---|---|---|---|---|---|---|
2 | 0 | UserInputError | tensorflow/configure.py | 74 | class | |
3 | 1 | is_windows | tensorflow/configure.py | 78 | function | |
4 | 2 | is_linux | tensorflow/configure.py | 82 | function | |
5 | 3 | is_macos | tensorflow/configure.py | 86 | function | |
6 | 4 | is_ppc64le | tensorflow/configure.py | 90 | function | |
7 | 5 | is_cygwin | tensorflow/configure.py | 94 | function | |
8 | 6 | get_input | tensorflow/configure.py | 98 | function | |
9 | 7 | symlink_force | tensorflow/configure.py | 109 | function | Force symlink, equivalent of 'ln -sf'. Args: target: items to link to; link_name: name of the link. |
10 | 8 | sed_in_place | tensorflow/configure.py | 126 | function | Replace old string with new string in file. Args: filename: string for filename; old: string to replace; new: new string to replace with. |
11 | 9 | write_to_bazelrc | tensorflow/configure.py | 141 | function | |
12 | 10 | write_action_env_to_bazelrc | tensorflow/configure.py | 146 | function | |
13 | 11 | run_shell | tensorflow/configure.py | 150 | function | |
14 | 12 | cygpath | tensorflow/configure.py | 163 | function | Convert path from POSIX to Windows. |
15 | 13 | get_python_path | tensorflow/configure.py | 168 | function | Get the python site package paths. |
16 | 14 | get_python_major_version | tensorflow/configure.py | 198 | function | Get the python major version. |
17 | 15 | setup_python | tensorflow/configure.py | 203 | function | Setup python related env variables. |
18 | 16 | reset_tf_configure_bazelrc | tensorflow/configure.py | 273 | function | Reset file that contains customized config settings. |
19 | 17 | cleanup_makefile | tensorflow/configure.py | 278 | function | Delete any leftover BUILD files from the Makefile build; these files could interfere with Bazel parsing. |
20 | 18 | get_var | tensorflow/configure.py | 292 | function | Get boolean input from user. If var_name is not set in env, ask user whether to enable query_item; if the response is empty, use the default. Args: environ_cp: copy of the os.environ; var_name: string for name of environment variable, e.g. "TF_NEED_CUDA"; query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs"; enabled_by_default: boolean for default behavior; question: optional string for how to ask for user input; yes_reply: optional string for reply when feature is enabled; no_reply: optional string for reply when feature is disabled. Returns: boolean value of the variable. Raises: UserInputError: if an environment variable is set but cannot be interpreted as a boolean, assume the user has made a scripting error and will continue to provide invalid input; raise the error to avoid looping infinitely. |
21 | 19 | set_build_var | tensorflow/configure.py | 377 | function | Set whether query_item will be enabled for the build. Ask user if query_item should be enabled; default is used if no input is given. Set subprocess environment variable and write to .bazelrc if enabled. Args: environ_cp: copy of the os.environ; var_name: string for name of environment variable, e.g. "TF_NEED_CUDA"; query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs"; option_name: string for option to define in .bazelrc; enabled_by_default: boolean for default behavior; bazel_config_name: name for Bazel --config argument to enable the build feature. |
22 | 20 | set_action_env_var | tensorflow/configure.py | 411 | function | Set boolean action_env variable. Ask user if query_item should be enabled; default is used if no input is given. Set environment variable and write to .bazelrc. Args: environ_cp: copy of the os.environ; var_name: string for name of environment variable, e.g. "TF_NEED_CUDA"; query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs"; enabled_by_default: boolean for default behavior; question: optional string for how to ask for user input; yes_reply: optional string for reply when feature is enabled; no_reply: optional string for reply when feature is disabled; bazel_config_name: if set, add a config to .bazelrc instead of an action_env. |
23 | 21 | convert_version_to_int | tensorflow/configure.py | 446 | function | Convert a version number to an integer that can be used for comparison. Version strings of the form X.YZ and X.Y.Z-xxxxx are supported; the 'xxxxx' part, for instance 'homebrew' on OS X, is ignored. Args: version: a version to be converted. Returns: an integer if converted successfully, otherwise None. |
24 | 22 | check_bazel_version | tensorflow/configure.py | 471 | function | Check that the installed Bazel version is between min_version and max_version. Args: min_version: string for minimum Bazel version (must exist!); max_version: string for maximum Bazel version (must exist!). Returns: the Bazel version detected. |
25 | 23 | set_cc_opt_flags | tensorflow/configure.py | 518 | function | Set up architecture-dependent optimization flags. Also appends CC optimization flags to bazel.rc. Args: environ_cp: copy of the os.environ. |
26 | 24 | set_tf_cuda_clang | tensorflow/configure.py | 546 | function | Set TF_CUDA_CLANG action_env. Args: environ_cp: copy of the os.environ. |
27 | 25 | set_tf_download_clang | tensorflow/configure.py | 566 | function | Set TF_DOWNLOAD_CLANG action_env. |
28 | 26 | get_from_env_or_user_or_default | tensorflow/configure.py | 582 | function | Get var_name from env, user input, or the default. If var_name has been set as an environment variable, use the preset value; otherwise ask for user input. If no input is provided, the default is used. Args: environ_cp: copy of the os.environ; var_name: string for name of environment variable, e.g. "TF_NEED_CUDA"; ask_for_var: string for how to ask for user input; var_default: default value string. Returns: string value for var_name. |
29 | 27 | set_clang_cuda_compiler_path | tensorflow/configure.py | 607 | function | Set CLANG_CUDA_COMPILER_PATH. |
30 | 28 | prompt_loop_or_load_from_env | tensorflow/configure.py | 630 | function | Loop over user prompts for an env param until receiving a valid response. For the env param var_name, read from the environment or verify user input until receiving valid input; when done, set var_name in environ_cp to its new value. Args: environ_cp: (Dict) copy of the os.environ; var_name: (String) name of environment variable, e.g. "TF_MYVAR"; var_default: (String) default value string; ask_for_var: (String) string for how to ask for user input; check_success: (Function) function that takes one argument and returns a boolean, True if the value provided is considered valid (it may emit a complex error message if error_msg does not provide enough information; in that case, set suppress_default_error to True); error_msg: (String) string with one and only one '%s', formatted with each invalid response upon check_success(input) failure; suppress_default_error: (Bool) suppress the above error message in favor of one from the check_success function; resolve_symlinks: (Bool) translate symbolic links into the real filepath; n_ask_attempts: (Integer) number of times to query for valid input before raising an error and quitting. Returns: (String) the value of var_name after querying for input. Raises: UserInputError: if a query has been attempted n_ask_attempts times without success, assume the user has made a scripting error and will continue to provide invalid input; raise the error to avoid looping infinitely. |
31 | 29 | create_android_ndk_rule | tensorflow/configure.py | 696 | function | Set ANDROID_NDK_HOME and write Android NDK WORKSPACE rule. |
32 | 30 | create_android_sdk_rule | tensorflow/configure.py | 724 | function | Set Android variables and write Android SDK WORKSPACE rule. |
33 | 31 | get_ndk_api_level | tensorflow/configure.py | 788 | function | Gets the appropriate NDK API level to use for the provided Android NDK path. |
34 | 32 | set_gcc_host_compiler_path | tensorflow/configure.py | 836 | function | Set GCC_HOST_COMPILER_PATH. |
35 | 33 | reformat_version_sequence | tensorflow/configure.py | 858 | function | Reformat the version string to have the given number of sequences. For example: (7, 2) -> 7.0; (7.0.1, 2) -> 7.0; (5, 1) -> 5; (5.0.3.2, 1) -> 5. Args: version_str: string, the version string; sequence_count: int, the number of sequences to keep. Returns: string, reformatted version string. |
36 | 34 | set_tf_cuda_paths | tensorflow/configure.py | 881 | function | Set TF_CUDA_PATHS. |
37 | 35 | set_tf_cuda_version | tensorflow/configure.py | 892 | function | Set TF_CUDA_VERSION. |
38 | 36 | set_tf_cudnn_version | tensorflow/configure.py | 904 | function | Set TF_CUDNN_VERSION. |
39 | 37 | is_cuda_compatible | tensorflow/configure.py | 916 | function | Check compatibility between given library and cudnn/cudart libraries. |
40 | 38 | set_tf_tensorrt_version | tensorflow/configure.py | 945 | function | Set TF_TENSORRT_VERSION. |
41 | 39 | set_tf_nccl_version | tensorflow/configure.py | 962 | function | Set TF_NCCL_VERSION. |
42 | 40 | get_native_cuda_compute_capabilities | tensorflow/configure.py | 979 | function | Get native CUDA compute capabilities. Args: environ_cp: copy of the os.environ. Returns: string of native CUDA compute capabilities, separated by commas. |
43 | 41 | set_tf_cuda_compute_capabilities | tensorflow/configure.py | 1003 | function | Set TF_CUDA_COMPUTE_CAPABILITIES. |
44 | 42 | set_other_cuda_vars | tensorflow/configure.py | 1074 | function | Set other CUDA related variables. |
45 | 43 | set_host_cxx_compiler | tensorflow/configure.py | 1083 | function | Set HOST_CXX_COMPILER. |
46 | 44 | set_host_c_compiler | tensorflow/configure.py | 1100 | function | Set HOST_C_COMPILER. |
47 | 45 | set_computecpp_toolkit_path | tensorflow/configure.py | 1117 | function | Set COMPUTECPP_TOOLKIT_PATH. |
48 | 46 | set_trisycl_include_dir | tensorflow/configure.py | 1149 | function | Set TRISYCL_INCLUDE_DIR. |
49 | 47 | system_specific_test_config | tensorflow/configure.py | 1173 | function | Add default build and test flags required for TF tests to bazelrc. |
50 | 48 | set_system_libs_flag | tensorflow/configure.py | 1216 | function | |
51 | 49 | is_reduced_optimize_huge_functions_available | tensorflow/configure.py | 1233 | function | Check whether the system supports /d2ReducedOptimizeHugeFunctions. This compiler flag was introduced to the Visual Studio compiler in version 16.4 (available in Visual Studio 2019, Preview edition only, as of 2019-11-19). TensorFlow needs this flag to massively reduce compile times, but until 16.4 is officially released, we can't depend on it. See also https://groups.google.com/a/tensorflow.org/d/topic/build/SsW98Eo7l3o/discussion. Because checking this manually is very annoying (checking the installed MSVC versions requires the registry, and it's not clear whether Bazel will use that installed version anyway), we expect environments that know they may use this flag to export TF_VC_VERSION=16.4. TODO(angerson, gunan): Remove this function when TensorFlow's minimum VS version is upgraded to 16.4. Arguments: environ_cp: environment of the current execution. Returns: boolean, whether /d2ReducedOptimizeHugeFunctions is available on this machine. |
52 | 50 | set_windows_build_flags | tensorflow/configure.py | 1262 | function | Set Windows specific build options. |
53 | 51 | config_info_line | tensorflow/configure.py | 1283 | function | Helper function to print formatted help text for Bazel config options. |
54 | 52 | configure_ios | tensorflow/configure.py | 1288 | function | Configures TensorFlow for iOS builds. This function will only be executed if `is_macos()` is true. |
55 | 53 | validate_cuda_config | tensorflow/configure.py | 1305 | function | Run find_cuda_config.py and return cuda_toolkit_path, or None. |
56 | 54 | main | tensorflow/configure.py | 1365 | function | |
57 | 55 | _running_from_pip_package | tensorflow/tensorflow/api_template.__init__.py | 132 | function | |
58 | 56 | _running_from_pip_package | tensorflow/tensorflow/api_template_v1.__init__.py | 142 | function | |
59 | 57 | _LazyLoader | tensorflow/tensorflow/virtual_root_template_v1.__init__.py | 33 | class | Lazily import a module so that we can forward it. |
60 | 58 | _forward_module | tensorflow/tensorflow/virtual_root_template_v1.__init__.py | 63 | function | |
61 | 59 | _LazyLoader | tensorflow/tensorflow/virtual_root_template_v2.__init__.py | 33 | class | Lazily import a module so that we can forward it. |
62 | 60 | _forward_module | tensorflow/tensorflow/virtual_root_template_v2.__init__.py | 63 | function | |
63 | 61 | VarsAndArithmeticObjectGraph | tensorflow/tensorflow/cc/saved_model/testdata/generate_saved_models.py | 37 | class | Three vars (one in a sub-module) and compute method. |
64 | 62 | ReferencesParent | tensorflow/tensorflow/cc/saved_model/testdata/generate_saved_models.py | 55 | class | |
65 | 63 | CyclicModule | tensorflow/tensorflow/cc/saved_model/testdata/generate_saved_models.py | 64 | class | |
66 | 64 | main | tensorflow/tensorflow/cc/saved_model/testdata/generate_saved_models.py | 77 | function | |
67 | 65 | tfadd | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 48 | function | |
68 | 66 | tfadd_with_ckpt | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 54 | function | |
69 | 67 | tfadd_with_ckpt_saver | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 69 | function | |
70 | 68 | tfassert_eq | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 88 | function | |
71 | 69 | tfcond | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 96 | function | |
72 | 70 | tfgather | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 104 | function | |
73 | 71 | tfmatmul | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 110 | function | |
74 | 72 | tfmatmulandadd | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 116 | function | |
75 | 73 | tffunction | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 124 | function | |
76 | 74 | tfsplits | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 135 | function | A more complex graph, including splits. |
77 | 75 | tftop_k | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 152 | function | |
78 | 76 | tfvariable_readonly | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 158 | function | |
79 | 77 | tfvariable | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 169 | function | |
80 | 78 | tfvariable_sequential_updates | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 177 | function | |
81 | 79 | export_debug_info | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 189 | function | Exports debug information from a graph. Args: exported_graph: a Graph that has been created by tracing a saveable view. Returns: corresponding GraphDebugInfo with traces for all ops in exported_graph. |
82 | 80 | write_graph | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 204 | function | Build a graph using build_graph and write it out. |
83 | 81 | main | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 223 | function | |
84 | 82 | _XlaClusterOutputGrad | tensorflow/tensorflow/compiler/jit/ops/xla_ops_grad.py | 25 | function | |
85 | 83 | TestGraphDebugInfo | tensorflow/tensorflow/compiler/mlir/lite/tests/debuginfo/concrete_function_error.py | 32 | class | Test stack trace can be displayed. |
86 | 84 | main | tensorflow/tensorflow/compiler/mlir/lite/tests/debuginfo/concrete_function_error.py | 64 | function | |
87 | 85 | TestModule | tensorflow/tensorflow/compiler/mlir/lite/tests/debuginfo/saved_model_error.py | 32 | class | The test model has unsupported op. |
88 | 86 | TestGraphDebugInfo | tensorflow/tensorflow/compiler/mlir/lite/tests/debuginfo/saved_model_error.py | 41 | class | Test stack trace can be displayed. |
89 | 87 | main | tensorflow/tensorflow/compiler/mlir/lite/tests/debuginfo/saved_model_error.py | 73 | function | Test driver method; writes the error message to stdout. |
90 | 88 | TestModule | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/basic.py | 38 | class | |
91 | 89 | Test | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/basic_v1.py | 49 | function | |
92 | 90 | TestModule | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/call_to_exported.py | 27 | class | |
93 | 91 | do_test | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/common.py | 43 | function | Runs a test. 1. Performs absl and tf "main"-like initialization that must run before almost anything else. 2. Converts `tf.Module` to SavedModel. 3. Converts SavedModel to MLIR. 4. Prints the textual MLIR to stdout (the caller is expected to have FileCheck checks in its file to check this output). This is only for use by the MLIR SavedModel importer tests. Args: create_module_fn: a callable taking no arguments, which returns the `tf.Module` to be converted and printed; exported_names: a set of exported names for the MLIR converter (default is "export all"); show_debug_info: if true, shows debug locations in the resulting MLIR. |
94 | 92 | set_tf_options | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/common_v1.py | 38 | function | |
95 | 93 | do_test | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/common_v1.py | 49 | function | Runs a test. 1. Performs absl and tf "main"-like initialization that must run before almost anything else. 2. Converts signature_def_map to SavedModel V1. 3. Converts SavedModel V1 to MLIR. 4. Prints the textual MLIR to stdout (the caller is expected to have FileCheck checks in its file to check this output). This is only for use by the MLIR SavedModel importer tests. Args: create_signature: a functor that returns signature_def_map, init_op and assets_collection; signature_def_map is a map from string key to signature_def, and the key will be used as the function name in the resulting MLIR; canonicalize: if true, the canonicalizer will be run on the resulting MLIR; show_debug_info: if true, shows debug locations in the resulting MLIR. |
96 | 94 | Test | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/control_flow_duplicate_v1.py | 42 | function | |
97 | 95 | Test | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/control_flow_upgrade_legacy_v1.py | 34 | function | |
98 | 96 | ReferencesParent | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/cyclic_object_graph.py | 27 | class | |
99 | 97 | TestModule | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/cyclic_object_graph.py | 38 | class | |
100 | 98 | Child | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/dag_object_graph.py | 27 | class | |
101 | 99 | TestModule | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/dag_object_graph.py | 37 | class | |
102 | 100 | TestModule | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/debug_info.py | 27 | class | |
103 | 101 | plus | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/defun_export.py | 29 | function | |
104 | 102 | test_defun | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/defun_export.py | 33 | function | |
105 | 103 | Test | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/duplicate_method_names_v1.py | 37 | function | |
106 | 104 | TestModule | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/exported_python_args.py | 27 | class | |
107 | 105 | write_vocabulary_file | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/hash_table_asset_v1.py | 39 | function | Write temporary vocab file for module construction. |
108 | 106 | test | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/hash_table_asset_v1.py | 49 | function | |
109 | 107 | Test | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/hash_table_v1.py | 60 | function | |
110 | 108 | mnist_model | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/keras.py | 27 | function | Creates an MNIST model. |
111 | 109 | TestModule | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/keras.py | 36 | class | |
112 | 110 | Test | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/multi_arguments_results_v1.py | 52 | function | |
113 | 111 | Test | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/multi_variables_v1.py | 39 | function | |
114 | 112 | TestModule | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/partially_shaped_variables.py | 27 | class | |
115 | 113 | Test | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/remove_init_variable_v1.py | 50 | function | |
116 | 114 | TestModule | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/shapes_for_arguments.py | 27 | class | |
117 | 115 | Test | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/shared_variable_v1.py | 41 | function | |
118 | 116 | TestModule | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/structured_input.py | 27 | class | |
119 | 117 | AdadeltaOptimizerTest | tensorflow/tensorflow/compiler/tests/adadelta_test.py | 31 | class | |
120 | 118 | AdagradDAOptimizerTest | tensorflow/tensorflow/compiler/tests/adagrad_da_test.py | 32 | class | |
121 | 119 | AdagradOptimizerTest | tensorflow/tensorflow/compiler/tests/adagrad_test.py | 31 | class | |
122 | 120 | adam_update_numpy | tensorflow/tensorflow/compiler/tests/adam_test.py | 34 | function | |
123 | 121 | AdamOptimizerTest | tensorflow/tensorflow/compiler/tests/adam_test.py | 52 | class | |
124 | 122 | XlaAddNTest | tensorflow/tensorflow/compiler/tests/add_n_test.py | 30 | class | |
125 | 123 | ArgMinMaxTest | tensorflow/tensorflow/compiler/tests/argminmax_test.py | 30 | class | |
126 | 124 | BinaryOpsTest | tensorflow/tensorflow/compiler/tests/binary_ops_test.py | 39 | class | Test cases for binary operators. |
127 | 125 | BucketizationOpTest | tensorflow/tensorflow/compiler/tests/bucketize_op_test.py | 30 | class | |
128 | 126 | CaseTest | tensorflow/tensorflow/compiler/tests/case_test.py | 31 | class | |
129 | 127 | CategoricalTest | tensorflow/tensorflow/compiler/tests/categorical_op_test.py | 36 | class | Test cases for random-number generating operators. |
130 | 128 | CholeskyOpTest | tensorflow/tensorflow/compiler/tests/cholesky_op_test.py | 35 | class | |
131 | 129 | ClusteringTest | tensorflow/tensorflow/compiler/tests/clustering_test.py | 35 | class | |
132 | 130 | ComplexNumbersDivisionTest | tensorflow/tensorflow/compiler/tests/complex_div_test.py | 35 | class | Test cases for complex numbers division operators. |
133 | 131 | ConcatTest | tensorflow/tensorflow/compiler/tests/concat_ops_test.py | 34 | class | |
134 | 132 | ConcatOffsetTest | tensorflow/tensorflow/compiler/tests/concat_ops_test.py | 335 | class | |
135 | 133 | PackTest | tensorflow/tensorflow/compiler/tests/concat_ops_test.py | 349 | class | |
136 | 134 | CondTest | tensorflow/tensorflow/compiler/tests/cond_test.py | 39 | class | |
137 | 135 | Conv2DTest | tensorflow/tensorflow/compiler/tests/conv2d_test.py | 42 | class | |
138 | 136 | Conv2DBackpropInputTest | tensorflow/tensorflow/compiler/tests/conv2d_test.py | 236 | class | |
139 | 137 | Conv2DBackpropFilterTest | tensorflow/tensorflow/compiler/tests/conv2d_test.py | 534 | class | |
140 | 138 | Conv3DBackpropFilterV2GradTest | tensorflow/tensorflow/compiler/tests/conv3d_test.py | 36 | class | |
141 | 139 | Conv3DTransposeTest | tensorflow/tensorflow/compiler/tests/conv3d_test.py | 69 | class | |
142 | 140 | ConvolutionNodeNameTest | tensorflow/tensorflow/compiler/tests/conv_node_name_test.py | 35 | class | Verify convolution node names match. Verify convolution node names on TPU and CPU match with dilation > 1. |
143 | 141 | XlaDataFormatDimMapTest | tensorflow/tensorflow/compiler/tests/data_format_ops_test.py | 30 | class | |
144 | 142 | XlaPermuteOpTest | tensorflow/tensorflow/compiler/tests/data_format_ops_test.py | 67 | class | |
145 | 143 | GetRunMetadataLabels | tensorflow/tensorflow/compiler/tests/dense_layer_test.py | 36 | function | Returns all labels in run_metadata. |
146 | 144 | InLabels | tensorflow/tensorflow/compiler/tests/dense_layer_test.py | 45 | function | Returns true iff one of the labels contains substr. |
147 | 145 | DenseLayerTest | tensorflow/tensorflow/compiler/tests/dense_layer_test.py | 50 | class | |
148 | 146 | ReferenceDepthwiseConv2D | tensorflow/tensorflow/compiler/tests/depthwise_conv_op_test.py | 35 | function | |
149 | 147 | ConfigsToTest | tensorflow/tensorflow/compiler/tests/depthwise_conv_op_test.py | 64 | function | Iterator for different convolution shapes, strides and paddings. Yields: tuple (input_size, filter_size, out_size, stride, padding), the depthwise convolution parameters. |
150 | 148 | ConfigsWithDilationsToTest | tensorflow/tensorflow/compiler/tests/depthwise_conv_op_test.py | 91 | function | Iterator for different convolution shapes, strides and paddings. Yields: tuple (input_size, filter_size, out_size, stride, dilation, padding), the depthwise convolution parameters. |
151 | 149 | CheckGradConfigsToTest | tensorflow/tensorflow/compiler/tests/depthwise_conv_op_test.py | 117 | function | Iterator for different convolution shapes, strides and paddings. compute_gradient_error() is very expensive, so the configs should be relatively small. Yields: tuple (input_size, filter_size, out_size, stride, padding), the depthwise convolution parameters. |
152 | 150 | DepthwiseConv2DTest | tensorflow/tensorflow/compiler/tests/depthwise_conv_op_test.py | 144 | class | |
153 | 151 | DynamicUpdateSliceOpsTest | tensorflow/tensorflow/compiler/tests/dynamic_slice_ops_test.py | 30 | class | |
154 | 152 | DynamicStitchTest | tensorflow/tensorflow/compiler/tests/dynamic_stitch_test.py | 30 | class | |
155 | 153 | EagerTest | tensorflow/tensorflow/compiler/tests/eager_test.py | 47 | class | |
156 | 154 | EagerFunctionTest | tensorflow/tensorflow/compiler/tests/eager_test.py | 301 | class | |
157 | 155 | ExcessivePaddingTest | tensorflow/tensorflow/compiler/tests/eager_test.py | 721 | class | Test that eager execution works with TPU flattened tensors. Tensors that would normally be excessively padded when written to TPU memory are reshaped to 1-D flat tensors. This test case verifies that such tensors work with eager execution. The flattening currently only happens on TPU, but the tests should work fine with all backends as flattening is transparent. |
158 | 156 | multiple_tpus | tensorflow/tensorflow/compiler/tests/eager_test.py | 772 | function | |
159 | 157 | MultiDeviceTest | tensorflow/tensorflow/compiler/tests/eager_test.py | 777 | class | Test running TPU computation on more than one core. |
160 | 158 | EinsumOpTest | tensorflow/tensorflow/compiler/tests/einsum_op_test.py | 30 | class | Test cases for einsum op. |
161 | 159 | EnsureShapeOpTest | tensorflow/tensorflow/compiler/tests/ensure_shape_op_test.py | 29 | class | |
162 | 160 | ExtractImagePatches | tensorflow/tensorflow/compiler/tests/extract_image_patches_op_test.py | 29 | class | Functional tests for ExtractImagePatches op. |
163 | 161 | FakeQuantWithMinMaxArgsTest | tensorflow/tensorflow/compiler/tests/fake_quant_ops_test.py | 27 | class | Test cases for FakeQuantWithMinMaxArgs operation. |
164 | 162 | FakeQuantWithMinMaxArgsGradientTest | tensorflow/tensorflow/compiler/tests/fake_quant_ops_test.py | 125 | class | Test cases for FakeQuantWithMinMaxArgsGradient operation. |
165 | 163 | FakeQuantWithMinMaxVarsTest | tensorflow/tensorflow/compiler/tests/fake_quant_ops_test.py | 226 | class | Test cases for FakeQuantWithMinMaxVars operation. |
166 | 164 | FakeQuantWithMinMaxVarsGradientTest | tensorflow/tensorflow/compiler/tests/fake_quant_ops_test.py | 331 | class | Test cases for FakeQuantWithMinMaxVarsGradient operation. |
167 | 165 | pick_10 | tensorflow/tensorflow/compiler/tests/fft_test.py | 38 | function | |
168 | 166 | to_32bit | tensorflow/tensorflow/compiler/tests/fft_test.py | 45 | function | |
169 | 167 | FFTTest | tensorflow/tensorflow/compiler/tests/fft_test.py | 60 | class | |
170 | 168 | FIFOQueueTest | tensorflow/tensorflow/compiler/tests/fifo_queue_test.py | 31 | class | |
171 | 169 | FtrlOptimizerTest | tensorflow/tensorflow/compiler/tests/ftrl_test.py | 32 | class | |
172 | 170 | FunctionTest | tensorflow/tensorflow/compiler/tests/function_test.py | 31 | class | |
173 | 171 | FusedBatchNormTest | tensorflow/tensorflow/compiler/tests/fused_batchnorm_test.py | 45 | class | |
174 | 172 | GatherNdTest | tensorflow/tensorflow/compiler/tests/gather_nd_op_test.py | 30 | class | |
175 | 173 | GatherTest | tensorflow/tensorflow/compiler/tests/gather_test.py | 34 | class | |
176 | 174 | GatherBenchmark | tensorflow/tensorflow/compiler/tests/gather_test.py | 158 | class | Microbenchmarks for the gather op. |
177 | 175 | _generate_numpy_random_rgb | tensorflow/tensorflow/compiler/tests/image_ops_test.py | 40 | function | |
178 | 176 | RGBToHSVTest | tensorflow/tensorflow/compiler/tests/image_ops_test.py | 47 | class | |
179 | 177 | AdjustContrastTest | tensorflow/tensorflow/compiler/tests/image_ops_test.py | 110 | class | |
180 | 178 | AdjustHueTest | tensorflow/tensorflow/compiler/tests/image_ops_test.py | 174 | class | |
181 | 179 | AdjustSaturationTest | tensorflow/tensorflow/compiler/tests/image_ops_test.py | 309 | class | |
182 | 180 | ResizeNearestNeighborTest | tensorflow/tensorflow/compiler/tests/image_ops_test.py | 409 | class | |
183 | 181 | ResizeBilinearTest | tensorflow/tensorflow/compiler/tests/image_ops_test.py | 548 | class | |
184 | 182 | ResizeBilinearGradTest | tensorflow/tensorflow/compiler/tests/image_ops_test.py | 631 | class | |
185 | 183 | ResizeBilinearNonAlignCornersTest | tensorflow/tensorflow/compiler/tests/image_ops_test.py | 713 | class | |
186 | 184 | NonMaxSuppressionTest | tensorflow/tensorflow/compiler/tests/image_ops_test.py | 776 | class | |
187 | 185 | BatchedNonMaxSuppressionCorrectnessTest | tensorflow/tensorflow/compiler/tests/image_ops_test.py | 985 | class | |
188 | 186 | NoRewriteSessionConfig | tensorflow/tensorflow/compiler/tests/jit_test.py | 46 | function | |
189 | 187 | CompiledKernel | tensorflow/tensorflow/compiler/tests/jit_test.py | 56 | function | Execute 'fn' as a compiled XLA kernel, with 'inputs'. |
190 | 188 | RunMetadataLabels | tensorflow/tensorflow/compiler/tests/jit_test.py | 68 | function | Returns all labels in run_metadata. |
191 | 189 | InLabels | tensorflow/tensorflow/compiler/tests/jit_test.py | 77 | function | Returns true iff one of the labels contains substr. |
192 | 190 | MetadataHasXlaRunOp | tensorflow/tensorflow/compiler/tests/jit_test.py | 82 | function | Returns true if there are XlaRun kernels in run_metadata's timeline. |
193 | 191 | JitLaunchTest | tensorflow/tensorflow/compiler/tests/jit_test.py | 89 | class | |
194 | 192 | XlaCompilationTest | tensorflow/tensorflow/compiler/tests/jit_test.py | 279 | class | Tests for auto-compilation on CPU/GPU devices. |
195 | 193 | ElementWiseFusionTest | tensorflow/tensorflow/compiler/tests/jit_test.py | 480 | class | |
196 | 194 | LazyCompilationTest | tensorflow/tensorflow/compiler/tests/jit_test.py | 520 | class | |
197 | 195 | ListDiffTest | tensorflow/tensorflow/compiler/tests/listdiff_op_test.py | 31 | class | |
198 | 196 | LRNTest | tensorflow/tensorflow/compiler/tests/lrn_ops_test.py | 39 | class | |
199 | 197 | Clip | tensorflow/tensorflow/compiler/tests/lstm.py | 38 | function | Clips x to the range [-1., 1.]. |
200 | 198 | LSTMCellWeightsShape | tensorflow/tensorflow/compiler/tests/lstm.py | 43 | function | Returns the shape of the weights for a single LSTM cell. |
201 | 199 | LSTMCell | tensorflow/tensorflow/compiler/tests/lstm.py | 50 | function | Unrolls a single LSTM cell with clipped activations forward by one step.
Args:
weights: Weight matrix with shape LSTMCellWeightsShape.
m_prev: Previous m states with shape [batch_size, num_nodes].
c_prev: Previous c states with shape [batch_size, num_nodes].
x: Input with shape [batch_size, num_inputs].
pad: Padding with shape [batch_size, 1]. Each padding value is either
0 or 1, where 1 indicates padding; i.e. the input is shorter than the
sequence length, and the (m, c) states should simply be passed through
from the previous states.
Returns:
The next (m, c) states, each with shape [batch_size, num_nodes]. |
202 | 200 | LSTMLayer | tensorflow/tensorflow/compiler/tests/lstm.py | 88 | function | Unrolls a layer of LSTM cells forward by the sequence length.
The sequence length is determined by the length of x_seq and pad_seq, which
must be the same.
Args:
cell_name: Base name of each cell.
weights: Weight matrix with shape LSTMCellWeightsShape.
m: Initial m states with shape [batch_size, num_nodes].
c: Initial c states with shape [batch_size, num_nodes].
x_seq: List of inputs, each with shape [batch_size, num_inputs].
The length of the list is the sequence length.
pad_seq: List of paddings, each with shape [batch_size, 1].
The length of the list is the sequence length.
Each padding value is either 0 or 1, where 1 indicates padding;
i.e. the input is shorter than the sequence length.
Returns:
List of per-sequence-step outputs, each with shape [batch_size, num_nodes].
Raises:
ValueError: If len(x_seq) != len(pad_seq). |
203 | 201 | RandomVar | tensorflow/tensorflow/compiler/tests/lstm.py | 121 | function | Returns a variable of the given shape initialized to random values. |
204 | 202 | RandomInputs | tensorflow/tensorflow/compiler/tests/lstm.py | 127 | function | Returns randomly initialized (x_seq, pad_seq) sequences. |
205 | 203 | BuildLSTMLayer | tensorflow/tensorflow/compiler/tests/lstm.py | 140 | function | Builds a single LSTM layer with random weights and inputs.
Args:
batch_size: Inputs are fed in batches of this size.
seq_length: The sequence length to unroll the LSTM layer.
num_inputs: Dimension of inputs that are fed into each LSTM cell.
num_nodes: The number of nodes in each LSTM cell.
Returns:
(out_seq, weights) pair. The out_seq is a list of per-sequence-step
outputs, each with shape [batch_size, num_nodes]. The weights are a list of
weight variables that may be trained. |
206 | 204 | _DumpGraph | tensorflow/tensorflow/compiler/tests/lstm_test.py | 40 | function | |
207 | 205 | _Sigmoid | tensorflow/tensorflow/compiler/tests/lstm_test.py | 47 | function | |
208 | 206 | _Clip | tensorflow/tensorflow/compiler/tests/lstm_test.py | 51 | function | |
209 | 207 | LSTMTest | tensorflow/tensorflow/compiler/tests/lstm_test.py | 55 | class | |
210 | 208 | LSTMBenchmark | tensorflow/tensorflow/compiler/tests/lstm_test.py | 238 | class | Micro-benchmarks for a single layer of LSTM cells. |
211 | 209 | ManipOpsTest | tensorflow/tensorflow/compiler/tests/manip_ops_test.py | 30 | class | Test cases for manip ops. |
212 | 210 | MatrixBandPartTest | tensorflow/tensorflow/compiler/tests/matrix_band_part_test.py | 30 | class | |
213 | 211 | zip_to_first_list_length | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 32 | function | |
214 | 212 | repack_diagonals | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 40 | function | |
215 | 213 | repack_diagonals_in_tests | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 77 | function | |
216 | 214 | square_cases | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 95 | function | |
217 | 215 | tall_cases | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 173 | function | |
218 | 216 | fat_cases | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 261 | function | |
219 | 217 | all_tests | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 329 | function | |
220 | 218 | MatrixDiagTest | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 333 | class | |
221 | 219 | MatrixSetDiagTest | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 519 | class | |
222 | 220 | MatrixDiagPartTest | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 652 | class | |
223 | 221 | InverseOpTest | tensorflow/tensorflow/compiler/tests/matrix_inverse_op_test.py | 31 | class | |
224 | 222 | MatrixSolveOpTest | tensorflow/tensorflow/compiler/tests/matrix_solve_op_test.py | 30 | class | |
225 | 223 | MakePlaceholder | tensorflow/tensorflow/compiler/tests/matrix_triangular_solve_op_test.py | 36 | function | |
226 | 224 | MatrixTriangularSolveOpTest | tensorflow/tensorflow/compiler/tests/matrix_triangular_solve_op_test.py | 40 | class | |
227 | 225 | MomentumOptimizerTest | tensorflow/tensorflow/compiler/tests/momentum_test.py | 33 | class | |
228 | 226 | NAryOpsTest | tensorflow/tensorflow/compiler/tests/nary_ops_test.py | 32 | class | |
229 | 227 | NullaryOpsTest | tensorflow/tensorflow/compiler/tests/nullary_ops_test.py | 29 | class | |
230 | 228 | PlaceholderTest | tensorflow/tensorflow/compiler/tests/placeholder_test.py | 28 | class | |
231 | 229 | _AvgPoolGrad | tensorflow/tensorflow/compiler/tests/pooling_ops_3d_test.py | 35 | function | |
232 | 230 | Pooling3DTest | tensorflow/tensorflow/compiler/tests/pooling_ops_3d_test.py | 45 | class | |
233 | 231 | NHWCToNCHW | tensorflow/tensorflow/compiler/tests/pooling_ops_test.py | 33 | function | Convert the input from NHWC format to NCHW.
Args:
input_tensor: a 4-D tensor, or a 4-element array representing the same.
Returns:
the converted tensor or a shape array |
234 | 232 | NCHWToNHWC | tensorflow/tensorflow/compiler/tests/pooling_ops_test.py | 48 | function | Convert the input from NCHW format to NHWC.
Args:
input_tensor: a 4-D tensor, or a 4-element array representing the same.
Returns:
the converted tensor or a shape array |
235 | 233 | GetTestConfigs | tensorflow/tensorflow/compiler/tests/pooling_ops_test.py | 63 | function | Get all the valid tests configs to run.
Returns:
all the valid test configs |
236 | 234 | PoolingTest | tensorflow/tensorflow/compiler/tests/pooling_ops_test.py | 73 | class | |
237 | 235 | PoolGradTest | tensorflow/tensorflow/compiler/tests/pooling_ops_test.py | 292 | class | |
238 | 236 | ProximalAdagradOptimizerTest | tensorflow/tensorflow/compiler/tests/proximal_adagrad_test.py | 32 | class | |
239 | 237 | ProximalGradientDescentOptimizerTest | tensorflow/tensorflow/compiler/tests/proximal_gradient_descent_test.py | 32 | class | |
240 | 238 | QrOpTest | tensorflow/tensorflow/compiler/tests/qr_op_test.py | 33 | class | |
241 | 239 | QuantizedOpsTest | tensorflow/tensorflow/compiler/tests/quantized_ops_test.py | 36 | class | |
242 | 240 | DequantizedOpsTest | tensorflow/tensorflow/compiler/tests/quantized_ops_test.py | 53 | class | |
243 | 241 | RandomOpsTest | tensorflow/tensorflow/compiler/tests/random_ops_test.py | 34 | class | Test cases for random-number generating operators. |
244 | 242 | ReduceOpsTest | tensorflow/tensorflow/compiler/tests/reduce_ops_test.py | 37 | class | |
245 | 243 | ReduceOpPrecisionTest | tensorflow/tensorflow/compiler/tests/reduce_ops_test.py | 183 | class | |
246 | 244 | ReduceWindowTest | tensorflow/tensorflow/compiler/tests/reduce_window_test.py | 31 | class | Test cases for xla.reduce_window. |
247 | 245 | ReshapeTest | tensorflow/tensorflow/compiler/tests/reshape_op_test.py | 30 | class | |
248 | 246 | ReverseOpsTest | tensorflow/tensorflow/compiler/tests/reverse_ops_test.py | 32 | class | |
249 | 247 | ReverseSequenceTest | tensorflow/tensorflow/compiler/tests/reverse_sequence_op_test.py | 29 | class | |
250 | 248 | RmspropTest | tensorflow/tensorflow/compiler/tests/rmsprop_test.py | 31 | class | |
251 | 249 | numpy_reverse | tensorflow/tensorflow/compiler/tests/scan_ops_test.py | 32 | function | |
252 | 250 | handle_options | tensorflow/tensorflow/compiler/tests/scan_ops_test.py | 43 | function | Adds tf options to numpy scan ops. |
253 | 251 | CumsumTest | tensorflow/tensorflow/compiler/tests/scan_ops_test.py | 72 | class | |
254 | 252 | CumprodTest | tensorflow/tensorflow/compiler/tests/scan_ops_test.py | 150 | class | |
255 | 253 | _AsType | tensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py | 31 | function | |
256 | 254 | _FlatInnerDims | tensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py | 35 | function | |
257 | 255 | _FlatOuterDims | tensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py | 42 | function | |
258 | 256 | _NumpyScatterNd | tensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py | 49 | function | |
259 | 257 | _NumpyUpdate | tensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py | 66 | function | |
260 | 258 | ScatterNdTest | tensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py | 71 | class | |
261 | 259 | ScatterNdTensorTest | tensorflow/tensorflow/compiler/tests/scatter_nd_op_test.py | 193 | class | |
262 | 260 | SearchSorteddOpTest | tensorflow/tensorflow/compiler/tests/searchsorted_op_test.py | 28 | class | |
263 | 261 | SegmentReductionOpsTest | tensorflow/tensorflow/compiler/tests/segment_reduction_ops_test.py | 32 | class | Test cases for segment reduction ops. |
264 | 262 | SelfAdjointEigOpTest | tensorflow/tensorflow/compiler/tests/self_adjoint_eig_op_test.py | 32 | class | |
265 | 263 | SliceTest | tensorflow/tensorflow/compiler/tests/slice_ops_test.py | 29 | class | |
266 | 264 | StridedSliceTest | tensorflow/tensorflow/compiler/tests/slice_ops_test.py | 127 | class | |
267 | 265 | XlaSortOpTest | tensorflow/tensorflow/compiler/tests/sort_ops_test.py | 32 | class | |
268 | 266 | space_to_batch_direct | tensorflow/tensorflow/compiler/tests/spacetobatch_op_test.py | 30 | function | Direct Python implementation of space-to-batch conversion.
This is used for tests only.
Args:
input_array: N-D array
block_shape: 1-D array of shape [num_block_dims].
paddings: 2-D array of shape [num_block_dims, 2].
Returns:
Converted tensor. |
269 | 267 | SpaceToBatchTest | tensorflow/tensorflow/compiler/tests/spacetobatch_op_test.py | 71 | class | Tests input-output pairs for the SpaceToBatch and BatchToSpace ops. |
270 | 268 | SpaceToBatchNDTest | tensorflow/tensorflow/compiler/tests/spacetobatch_op_test.py | 152 | class | Tests input-output pairs for the SpaceToBatchND and BatchToSpaceND ops. |
271 | 269 | _SparseToDense | tensorflow/tensorflow/compiler/tests/sparse_to_dense_op_test.py | 31 | function | |
272 | 270 | SparseToDenseTest | tensorflow/tensorflow/compiler/tests/sparse_to_dense_op_test.py | 46 | class | |
273 | 271 | _igamma | tensorflow/tensorflow/compiler/tests/special_math_test.py | 48 | function | |
274 | 272 | _igammac | tensorflow/tensorflow/compiler/tests/special_math_test.py | 53 | function | |
275 | 273 | implicit_reparameterization_grad | tensorflow/tensorflow/compiler/tests/special_math_test.py | 58 | function | |
276 | 274 | _log1p | tensorflow/tensorflow/compiler/tests/special_math_test.py | 65 | function | |
277 | 275 | Log1pTest | tensorflow/tensorflow/compiler/tests/special_math_test.py | 69 | class | |
278 | 276 | IgammaTest | tensorflow/tensorflow/compiler/tests/special_math_test.py | 139 | class | |
279 | 277 | IgammacTest | tensorflow/tensorflow/compiler/tests/special_math_test.py | 324 | class | |
280 | 278 | StackOpTest | tensorflow/tensorflow/compiler/tests/stack_ops_test.py | 32 | class | |
281 | 279 | xla_device | tensorflow/tensorflow/compiler/tests/stateful_random_ops_test.py | 41 | function | |
282 | 280 | xla_device_name | tensorflow/tensorflow/compiler/tests/stateful_random_ops_test.py | 55 | function | |
283 | 281 | StatefulRandomOpsTest | tensorflow/tensorflow/compiler/tests/stateful_random_ops_test.py | 64 | class | Test cases for stateful random-number generator operators. |
284 | 282 | StatelessRandomOpsTest | tensorflow/tensorflow/compiler/tests/stateless_random_ops_test.py | 33 | class | Test cases for stateless random-number generator operators. |
285 | 283 | StatelessRandomOpsBenchmark | tensorflow/tensorflow/compiler/tests/stateless_random_ops_test.py | 136 | class | Microbenchmarks for the stateless random ops. |
286 | 284 | SvdOpTest | tensorflow/tensorflow/compiler/tests/svd_op_test.py | 33 | class | |
287 | 285 | _make_converter | tensorflow/tensorflow/compiler/tests/tensor_array_ops_test.py | 42 | function | |
288 | 286 | TensorArrayTest | tensorflow/tensorflow/compiler/tests/tensor_array_ops_test.py | 53 | class | |
289 | 287 | ListOpsTest | tensorflow/tensorflow/compiler/tests/tensor_list_ops_test.py | 34 | class | |
290 | 288 | TernaryOpsTest | tensorflow/tensorflow/compiler/tests/ternary_ops_test.py | 34 | class | |
291 | 289 | ConvertBetweenDataFormats | tensorflow/tensorflow/compiler/tests/test_utils.py | 26 | function | Converts 4D tensor between data formats. |
292 | 290 | PermuteDimsBetweenDataFormats | tensorflow/tensorflow/compiler/tests/test_utils.py | 47 | function | Get new shape for converting between data formats. |
293 | 291 | RunWithWarmup | tensorflow/tensorflow/compiler/tests/test_utils.py | 71 | function | Runs a graph a few times to ensure that its clusters are compiled. |
294 | 292 | _tfconst | tensorflow/tensorflow/compiler/tests/tridiagonal_solve_ops_test.py | 39 | function | |
295 | 293 | _tf_ones | tensorflow/tensorflow/compiler/tests/tridiagonal_solve_ops_test.py | 43 | function | |
296 | 294 | TridiagonalSolveOpsTest | tensorflow/tensorflow/compiler/tests/tridiagonal_solve_ops_test.py | 47 | class | Tests for tridiagonal matrix-related ops. |
297 | 295 | nhwc_to_format | tensorflow/tensorflow/compiler/tests/unary_ops_test.py | 37 | function | Converts a numpy array from NHWC format to `data_format`. |
298 | 296 | UnaryOpsTest | tensorflow/tensorflow/compiler/tests/unary_ops_test.py | 48 | class | Test cases for unary operators. |
299 | 297 | UnstackOpTest | tensorflow/tensorflow/compiler/tests/unstack_test.py | 29 | class | |
300 | 298 | VariableOpsTest | tensorflow/tensorflow/compiler/tests/variable_ops_test.py | 40 | class | Test cases for resource variable operators. |
301 | 299 | StridedSliceAssignChecker | tensorflow/tensorflow/compiler/tests/variable_ops_test.py | 422 | class | Compares the results of a slice assignment using TensorFlow and NumPy. |
302 | 300 | SliceAssignTest | tensorflow/tensorflow/compiler/tests/variable_ops_test.py | 451 | class | |
303 | 301 | WhileTest | tensorflow/tensorflow/compiler/tests/while_test.py | 39 | class | |
304 | 302 | is_compile_on_demand | tensorflow/tensorflow/compiler/tests/while_test.py | 260 | function | |
305 | 303 | XlaDeviceGpuTest | tensorflow/tensorflow/compiler/tests/xla_device_gpu_test.py | 28 | class | |
306 | 304 | XlaDeviceTest | tensorflow/tensorflow/compiler/tests/xla_device_test.py | 32 | class | |
307 | 305 | XlaOpsNumericalTest | tensorflow/tensorflow/compiler/tests/xla_ops_test.py | 37 | class | |
308 | 306 | XlaOpsShapeInferenceTest | tensorflow/tensorflow/compiler/tests/xla_ops_test.py | 366 | class | |
309 | 307 | parse_disabled_manifest | tensorflow/tensorflow/compiler/tests/xla_test.py | 55 | function | |
310 | 308 | XLATestCase | tensorflow/tensorflow/compiler/tests/xla_test.py | 81 | class | XLA test cases are parameterized test cases. |
311 | 309 | Benchmark | tensorflow/tensorflow/compiler/tests/xla_test.py | 250 | function | Build a graph and run benchmarks against it, with or without XLA.
Args:
tf_bench: An instance of tf.test.Benchmark, used to run the benchmark.
builder_fn: A function that builds a graph when invoked, and returns
(name, fetches), where name is the name of the test, and fetches
is a list of tensors to fetch as output.
use_xla_jit: If true compile with the XLA JIT, otherwise use regular TF.
device: The tensorflow device to run on, e.g. "cpu", "gpu".
separate_compiled_gradients: If true put each gradient subgraph into a
separate compilation scope. This gives fine-grained control over which
portions of the graph will be compiled as a single unit. Compiling
gradients separately may yield better performance for some graphs.
The scope is named based on the scope of the forward computation as well
as the name of the gradients. As a result, the gradients will be compiled
in a scope that is separate from both the forward computation, and from
other gradients. |
312 | 310 | XlaTestCaseTestCase | tensorflow/tensorflow/compiler/tests/xla_test_test.py | 25 | class | |
313 | 311 | _unary_op | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 70 | function | Wrapper that restricts `fn` to have the correct signature. |
314 | 312 | _broadcasting_binary_op | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 119 | function | Wraps a binary TensorFlow operator and performs XLA-style broadcasting. |
315 | 313 | _shift_right_logical_helper | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 152 | function | Performs an integer right logical shift irrespective of input type. |
316 | 314 | _shift_right_arithmetic_helper | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 167 | function | Performs an integer right arithmetic shift irrespective of input type. |
317 | 315 | _binary_op | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 211 | function | Wrapper that restricts `fn` to have the correct signature. |
318 | 316 | broadcast | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 226 | function | |
319 | 317 | clamp | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 234 | function | |
320 | 318 | conv | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 241 | function | Wraps the XLA ConvGeneralDilated operator.
ConvGeneralDilated is the most general form of XLA convolution and is
documented at
https://www.tensorflow.org/performance/xla/operation_semantics#conv_convolution
Args:
lhs: the input tensor
rhs: the kernel tensor
window_strides: the inter-window strides
padding: the padding to apply at the start and end of each input dimension
lhs_dilation: dilation to apply between input elements
rhs_dilation: dilation to apply between kernel elements
dimension_numbers: a `ConvolutionDimensionNumbers` proto.
feature_group_count: number of feature groups for grouped convolution.
precision_config: a `xla.PrecisionConfig` proto.
name: an optional name for the operator
Returns:
A tensor representing the output of the convolution. |
321 | 319 | dot | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 291 | function | |
322 | 320 | dot_general | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 295 | function | |
323 | 321 | self_adjoint_eig | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 307 | function | |
324 | 322 | svd | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 311 | function | |
325 | 323 | random_normal | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 327 | function | |
326 | 324 | random_uniform | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 333 | function | |
327 | 325 | reduce_window | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 343 | function | Wraps the XLA ReduceWindow operator.
ReduceWindow is documented at
https://www.tensorflow.org/performance/xla/operation_semantics#reducewindow .
Args:
operand: the input tensor
init: a scalar tensor representing the initial value for the reduction
reducer: a reduction function that combines a pair of scalars.
window_dimensions: shape of the window, as a list of integers
window_strides: inter-window strides, as a list of integers. Optional; if
omitted, defaults to strides of 1.
padding: padding to apply to 'operand'. List of (low, high) pairs of
integers that specify the padding to apply before and after each
dimension. Optional; if omitted, defaults to no padding.
name: the operator name, or None.
Returns:
A tensor that represents the output of the reduce_window operator. |
328 | 326 | reshape | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 391 | function | |
329 | 327 | select | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 398 | function | |
330 | 328 | slice | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 406 | function | |
331 | 329 | _sharding_grad | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 418 | function | |
332 | 330 | _spmd_full_to_shard_shape_grad | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 431 | function | |
333 | 331 | _spmd_shard_to_full_shape_grad | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 440 | function | |
334 | 332 | gather | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 452 | function | |
335 | 333 | scatter | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 463 | function | |
336 | 334 | Sharding | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 28 | class | A class to support adding sharding attributes to Ops.
Use the factory constructors and then call apply_to_tensor:
Sharding.replicate().apply_to_tensor(tensor) |
337 | 335 | replicate | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 179 | function | |
338 | 336 | assign_device | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 188 | function | Returns a tensor that has AssignDevice sharding attribute. |
339 | 337 | tile | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 202 | function | Returns a tensor that has tiled sharding.
Args:
tensor: A tf.Tensor to shard.
tile_assignment: An np.ndarray describing the topology of the tiling and
which device will compute which part of the topology.
assign_tuple_sharding: If the sharding type should be a tuple.
use_sharding_op: If true, adds a sharding op to set the sharding. |
340 | 338 | split | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 224 | function | Returns a tensor that is split along the given dimension.
Args:
tensor: A tf.Tensor to split.
split_dimension: The dimension to split.
num_devices: The number of devices to partition the dimension.
assign_tuple_sharding: If the sharding type should be a tuple.
use_sharding_op: If true, adds a sharding op to set the sharding.
input_shape: The full shape of the input tensor. |
341 | 339 | get_op_sharding | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 248 | function | Returns sharding attribute of an op.
Args:
op: a TensorFlow op.
Returns:
The attribute representing XLA sharding on this op. |
342 | 340 | auto_to_manual_spmd_partition | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 260 | function | Switches from automatic SPMD partitioning to manual partitioning.
Converts a full-shaped tensor (to be automatically partitioned by SPMD
partitioner) to a shard-shaped tensor to be consumed by manually partitioned
ops.
Args:
tensor: A tf.Tensor in full shape.
manual_sharding: a serialized string of OpSharding to be used in manual
partitioning.
Returns:
A shard-shaped tensor to be consumed by manually partitioned ops. |
343 | 341 | manual_to_auto_spmd_partition | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 279 | function | Switches from manual partitioning to automatic SPMD partitioning.
Converts a shard-shaped tensor (manually partitioned in SPMD-style) to a
full-shaped tensor to be partitioned automatically by the SPMD partitioner.
Args:
tensor: A tf.Tensor in shard shape.
manual_sharding: a serialized string of OpSharding to be used in manual
partitioning.
full_shape: the shape of tensor before partitioning.
Returns:
A full-shaped tensor to be partitioned automatically by the SPMD
partitioner. |
344 | 342 | numpy_assert_allclose | tensorflow/tensorflow/compiler/xla/python/bfloat16_test.py | 35 | function | |
345 | 343 | Bfloat16Test | tensorflow/tensorflow/compiler/xla/python/bfloat16_test.py | 53 | class | Tests the non-numpy Python methods of the bfloat16 type. |
346 | 344 | Bfloat16NumPyTest | tensorflow/tensorflow/compiler/xla/python/bfloat16_test.py | 251 | class | Tests the NumPy integration of the bfloat16 type. |
347 | 345 | _interpreter_backend_factory | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 58 | function | |
348 | 346 | _cpu_backend_factory | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 62 | function | |
349 | 347 | _gpu_backend_factory | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 66 | function | Returns a GPU backend. BFC allocator is used by default. |
350 | 348 | register_local_backend_factory | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 101 | function | |
351 | 349 | _get_local_backends | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 108 | function | Instantiates all known local backends. |
352 | 350 | get_local_backend | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 131 | function | Returns a local backend.
Args:
name: the backend name. If `None`, a default local backend is returned,
typically `gpu` if one is present, or `cpu` if not. If a string, the named
backend is returned or an exception raised.
Returns:
A LocalBackend object. |
353 | 351 | OpMetadata | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 152 | class | Python representation of an xla.OpMetadata protobuf. |
354 | 352 | CurrentSourceInfoMetadata | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 163 | function | Helper for use in source mapping that returns an OpMetadata object. |
355 | 353 | dtype_to_etype | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 206 | function | Convenience function for reading DTYPE_TO_XLA_ELEMENT_TYPE. |
356 | 354 | shape_from_pyval | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 272 | function | Returns a Shape that describes a tuple-tree of Numpy arrays. |
357 | 355 | execute_with_python_values | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 334 | function | Execute on one replica with Python values as arguments and output. |
358 | 356 | execute_with_python_values_replicated | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 345 | function | Execute on many replicas with Python values as arguments and output.
Arguments:
executable: the program to run.
arguments: a list of lists of Python values indexed by `[replica][arg_num]`
to pass as inputs.
backend: the backend we are targeting.
Returns:
A list of python values, one per replica. |
359 | 357 | PaddingType | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 374 | class | |
360 | 358 | window_padding_type_to_pad_values | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 379 | function | Maps PaddingType or string to pad values (list of pairs of ints). |
361 | 359 | register_custom_call_target | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 418 | function | Registers a custom call target.
Args:
name: bytes containing the name of the function.
fn: a PyCapsule object containing the function pointer.
platform: the target platform. |
362 | 360 | PaddingConfigDimension | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 433 | class | Python representation of an xla.PaddingConfigDimension protobuf. |
363 | 361 | PaddingConfig | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 443 | class | Python representation of an xla.PaddingConfig protobuf. |
364 | 362 | make_padding_config | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 451 | function | Create PaddingConfig proto from list of triples of integers.
Args:
padding_config: either a PaddingConfig or a list of integer triples
(edge_padding_low, edge_padding_high, interior_padding) representing the
configuration of the padding operation.
Returns:
A `PaddingConfig` object. |
365 | 363 | DotDimensionNumbers | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 476 | class | Python representation of an xla.DotDimensionNumbers protobuf. |
366 | 364 | make_dot_dimension_numbers | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 488 | function | Builds a DotDimensionNumbers object from a specification.
Args:
dimension_numbers: either a `DotDimensionNumbers` or a nested tuple
`((lhs_contract, rhs_contract), (lhs_batch, rhs_batch))` of lists of
integers representing the dimensions to treat as contracting dimensions
and batch dimensions on each input operand.
Returns:
A `DotDimensionNumbers` object. |
367 | 365 | ConvolutionDimensionNumbers | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 516 | class | Python representation of an xla.ConvolutionDimensionNumbers protobuf. |
368 | 366 | make_convolution_dimension_numbers | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 536 | function | Builds a ConvolutionDimensionNumbers object from a specification.
Args:
dimension_numbers: optional, either a ConvolutionDimensionNumbers object or
a tuple (lhs_spec, rhs_spec, out_spec). Each element is a string of
length N+2 identifying by position: (1) batch dimensions in lhs, rhs, and
the output with the character 'N', (2) feature dimensions in lhs and the
output with the character 'C', (3) input and output feature dimensions
in rhs with the characters 'I' and 'O' respectively, and (4) spatial
dimension correspondences between lhs, rhs, and the output using any
distinct characters. For example, to indicate dimension numbers
consistent with the Conv operation with two spatial dimensions, one
could use ('NCHW', 'OIHW', 'NCHW'). As another example, to indicate
dimension numbers consistent with the TensorFlow Conv2D operation, one
could use ('NHWC', 'HWIO', 'NHWC'). When using the latter form of
convolution dimension specification, window strides are associated with
spatial dimension character labels according to the order in which the
labels appear in the rhs_spec string, so that window_strides[0] is
matched with the dimension corresponding to the first character
appearing in rhs_spec that is not 'I' or 'O'. By default, use the same
dimension numbering as Conv and ConvWithGeneralPadding.
num_spatial_dimensions: the number of spatial dimensions.
Returns:
A `ConvolutionDimensionNumbers` object. |
369 | 367 | OpSharding | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 600 | class | Python representation of an xla.OpSharding protobuf. |
370 | 368 | PrecisionConfig | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 614 | class | Python representation of an xla.PrecisionConfig protobuf. |
371 | 369 | GatherDimensionNumbers | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 624 | class | Python representation of an xla.GatherDimensionNumbers protobuf. |
372 | 370 | ScatterDimensionNumbers | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 636 | class | Python representation of an xla.ScatterDimensionNumbers protobuf. |
373 | 371 | ReplicaGroup | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 648 | class | Python representation of an xla.ReplicaGroup protobuf. |
374 | 372 | _make_replica_group_proto | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 656 | function | |
375 | 373 | make_replica_groups | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 662 | function | |
376 | 374 | tracebacks | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 677 | function | Context manager that enables or disables traceback collection. |
377 | 375 | heap_profile | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 687 | function | Returns a gzipped pprof protocol buffer containing a heap profile. |
378 | 376 | TestFactory | tensorflow/tensorflow/compiler/xla/python/xla_client_test.py | 56 | function | |
379 | 377 | InstantiateTests | tensorflow/tensorflow/compiler/xla/python/xla_client_test.py | 2103 | function | |
380 | 378 | TpuBackend | tensorflow/tensorflow/compiler/xla/python/tpu_driver/client/tpu_client.py | 29 | class | XLA backend implemented using the Tpu driver API. |
381 | 379 | ConvertLiteralToNumpyArray | tensorflow/tensorflow/compiler/xla/python_api/xla_literal.py | 28 | function | Converts an XLA literal to a Numpy array. |
382 | 380 | _ConvertNumpyArrayToLiteral | tensorflow/tensorflow/compiler/xla/python_api/xla_literal.py | 64 | function | Converts a Numpy array to an XLA literal. |
383 | 381 | ConvertNumpyArrayToLiteral | tensorflow/tensorflow/compiler/xla/python_api/xla_literal.py | 85 | function | Converts a Numpy array or a nested tuple thereof to an XLA literal. |
384 | 382 | Shape | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 29 | class | Wraps a xla_data_pb2.ShapeProto message with a convenient Python type.
Provides direct access to the underlying xla_data_pb2.ShapeProto message in
the message attribute, along with accessor wrappers to the message's fields.
Avoid direct access to .message unless interacting directly with protobuf APIs
like CopyFrom. In other words, prefer hauling the shape around in a Shape, and
only access .message when strictly required by the protobuf API. |
385 | 383 | _CreateShapeFromNumpy | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 103 | function | Create a Shape from a given Numpy array.
Args:
ndarray: Numpy array.
Returns:
A Shape object. |
386 | 384 | CreateShapeFromNumpy | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 129 | function | Create a Shape from a Numpy array or a nested tuple structure thereof.
Args:
value: Numpy array or (possibly nested) tuple structure that bottoms out in
Numpy arrays.
Returns:
A Shape object. |
387 | 385 | CreateShapeFromDtypeAndTuple | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 147 | function | Create a shape from a Numpy dtype and a sequence of nonnegative integers.
Args:
dtype: a numpy dtype, e.g. np.dtype('int32').
shape_tuple: a sequence of nonnegative integers.
Returns:
A Shape object. |
388 | 386 | RamFilesystemTest | tensorflow/tensorflow/core/platform/ram_file_system_test.py | 38 | class | |
389 | 387 | AddOneTest | tensorflow/tensorflow/examples/adding_an_op/cuda_op_test.py | 25 | class | |
390 | 388 | FactTest | tensorflow/tensorflow/examples/adding_an_op/fact_test.py | 25 | class | |
391 | 389 | ZeroOut1Test | tensorflow/tensorflow/examples/adding_an_op/zero_out_1_test.py | 29 | class | |
392 | 390 | ZeroOut2Test | tensorflow/tensorflow/examples/adding_an_op/zero_out_2_test.py | 30 | class | |
393 | 391 | ZeroOut3Test | tensorflow/tensorflow/examples/adding_an_op/zero_out_3_test.py | 27 | class | |
394 | 392 | _zero_out_grad | tensorflow/tensorflow/examples/adding_an_op/zero_out_grad_2.py | 28 | function | The gradients for `zero_out`.
Args:
op: The `zero_out` `Operation` that we are differentiating, which we can use
to find the inputs and outputs of the original op.
grad: Gradient with respect to the output of the `zero_out` op.
Returns:
Gradients with respect to the input of `zero_out`. |
395 | 393 | load_graph | tensorflow/tensorflow/examples/label_image/label_image.py | 26 | function | |
396 | 394 | read_tensor_from_image_file | tensorflow/tensorflow/examples/label_image/label_image.py | 38 | function | |
397 | 395 | load_labels | tensorflow/tensorflow/examples/label_image/label_image.py | 65 | function | |
398 | 396 | main | tensorflow/tensorflow/examples/saved_model/integration_tests/deploy_mnist_cnn.py | 47 | function | |
399 | 397 | MaybeDistributionScope | tensorflow/tensorflow/examples/saved_model/integration_tests/distribution_strategy_utils.py | 48 | class | Provides a context allowing no distribution strategy. |
400 | 398 | make_feature_extractor | tensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py | 56 | function | Returns a Keras Model to compute a feature vector from MNIST images. |
401 | 399 | set_feature_extractor_hparams | tensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py | 72 | function | |
402 | 400 | make_classifier | tensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py | 76 | function | Returns a Keras Model to classify MNIST using feature_extractor. |
403 | 401 | wrap_keras_model_for_export | tensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py | 87 | function | Wraps `model` for saving and loading as SavedModel. |
404 | 402 | _get_traced_loss | tensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py | 144 | function | Returns tf.function for model.losses[i] with a trace for zero args.
The intended usage is
[_get_traced_loss(model, i) for i in range(len(model.losses))]
This is better than
[tf.function(lambda: model.losses[i], input_signature=[]) for i ...]
because it avoids capturing a loop index in a lambda, and removes any
chance of deferring the trace.
Args:
model: a Keras Model.
i: an integer from 0 up to, but not including, len(model.losses). |
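The lambda-capture pitfall this docstring alludes to is general Python behavior, not Keras-specific: a lambda created in a loop closes over the loop variable itself, not its value at creation time. A minimal sketch (the names below are illustrative, not from the source):

```python
# Stand-ins for model.losses; any list of values works.
losses = [10, 20, 30]

# Broken: every lambda closes over the same loop variable 'i',
# which is left at its final value (2) after the comprehension runs.
late_bound = [lambda: losses[i] for i in range(len(losses))]

# Fixed: a helper function (as _get_traced_loss does) creates a new
# scope per iteration, freezing the index for each closure.
def get_loss_fn(losses, i):
    return lambda: losses[i]

early_bound = [get_loss_fn(losses, i) for i in range(len(losses))]
```

Calling each function in `late_bound` yields the last loss three times, while `early_bound` yields each loss in order.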
405 | 403 | main | tensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py | 163 | function | |
406 | 404 | main | tensorflow/tensorflow/examples/saved_model/integration_tests/export_rnn_cell.py | 32 | function | |
407 | 405 | write_vocabulary_file | tensorflow/tensorflow/examples/saved_model/integration_tests/export_simple_text_embedding.py | 34 | function | Write temporary vocab file for module construction. |
408 | 406 | TextEmbeddingModel | tensorflow/tensorflow/examples/saved_model/integration_tests/export_simple_text_embedding.py | 44 | class | Text embedding model.
A text embedding model that takes sentences as input and outputs
sentence embeddings. |
409 | 407 | main | tensorflow/tensorflow/examples/saved_model/integration_tests/export_simple_text_embedding.py | 96 | function | |
410 | 408 | TextRnnModel | tensorflow/tensorflow/examples/saved_model/integration_tests/export_text_rnn_model.py | 31 | class | Text RNN model.
A full generative text RNN model that can train and decode sentences from a
starting word. |
411 | 409 | main | tensorflow/tensorflow/examples/saved_model/integration_tests/export_text_rnn_model.py | 170 | function | |
412 | 410 | TestCase | tensorflow/tensorflow/examples/saved_model/integration_tests/integration_scripts.py | 42 | class | Base class to write SavedModel integration tests. |
413 | 411 | MaybeRunScriptInstead | tensorflow/tensorflow/examples/saved_model/integration_tests/integration_scripts.py | 62 | function | |
414 | 412 | _load_random_data | tensorflow/tensorflow/examples/saved_model/integration_tests/mnist_util.py | 28 | function | |
415 | 413 | load_reshaped_data | tensorflow/tensorflow/examples/saved_model/integration_tests/mnist_util.py | 34 | function | Returns MNIST or Fashion MNIST or fake train and test data. |
416 | 414 | _prepare_image | tensorflow/tensorflow/examples/saved_model/integration_tests/mnist_util.py | 44 | function | Converts images to [n,h,w,c] format in range [0,1]. |
417 | 415 | _prepare_label | tensorflow/tensorflow/examples/saved_model/integration_tests/mnist_util.py | 49 | function | Converts labels to one-hot encoding. |
418 | 416 | SavedModelTest | tensorflow/tensorflow/examples/saved_model/integration_tests/saved_model_test.py | 32 | class | |
419 | 417 | make_feature_extractor | tensorflow/tensorflow/examples/saved_model/integration_tests/use_mnist_cnn.py | 72 | function | Load a pre-trained feature extractor and wrap it for use in Keras. |
420 | 418 | make_classifier | tensorflow/tensorflow/examples/saved_model/integration_tests/use_mnist_cnn.py | 100 | function | Returns a Keras Model to classify MNIST using feature_extractor. |
421 | 419 | main | tensorflow/tensorflow/examples/saved_model/integration_tests/use_mnist_cnn.py | 112 | function | |
422 | 420 | train | tensorflow/tensorflow/examples/saved_model/integration_tests/use_model_in_sequential_keras.py | 35 | function | Build a Keras model and train with mock data. |
423 | 421 | main | tensorflow/tensorflow/examples/saved_model/integration_tests/use_model_in_sequential_keras.py | 67 | function | |
424 | 422 | main | tensorflow/tensorflow/examples/saved_model/integration_tests/use_rnn_cell.py | 33 | function | |
425 | 423 | train | tensorflow/tensorflow/examples/saved_model/integration_tests/use_text_embedding_in_dataset.py | 34 | function | Build a Keras model and train with mock data. |
426 | 424 | main | tensorflow/tensorflow/examples/saved_model/integration_tests/use_text_embedding_in_dataset.py | 65 | function | |
427 | 425 | main | tensorflow/tensorflow/examples/saved_model/integration_tests/use_text_rnn_model.py | 32 | function | |
428 | 426 | StreamingAccuracyStats | tensorflow/tensorflow/examples/speech_commands/accuracy_utils.py | 24 | class | Gets streaming accuracy statistics every time a new command is found.
Attributes:
_how_many_gt: How many ground truths there are.
_how_many_gt_matched: How many ground truths have been matched.
_how_many_fp: How many commands have been fired as false positives.
_how_many_c: How many commands have been fired correctly.
_how_many_w: How many commands have been fired incorrectly.
_gt_occurrence: A list recording which commands occur, and when, in the
input audio stream.
_previous_c: The last recorded value of _how_many_c.
_previous_w: The last recorded value of _how_many_w.
_previous_fp: The last recorded value of _how_many_fp. |
429 | 427 | create_inference_graph | tensorflow/tensorflow/examples/speech_commands/freeze.py | 63 | function | Creates an audio model with the nodes needed for inference.
Uses the supplied arguments to create a model, and inserts the input and
output nodes that are needed to use the graph for inference.
Args:
wanted_words: Comma-separated list of the words we're trying to recognize.
sample_rate: How many samples per second are in the input audio files.
clip_duration_ms: Length in milliseconds of each audio clip to analyze.
clip_stride_ms: How often to run recognition. Useful for models with cache.
window_size_ms: Time slice duration to estimate frequencies from.
window_stride_ms: How far apart time slices should be.
feature_bin_count: Number of frequency bands to analyze.
model_architecture: Name of the kind of model to generate.
preprocess: How the spectrogram is processed to produce features, for
example 'mfcc', 'average', or 'micro'.
Returns:
Input and output tensor objects.
Raises:
Exception: If the preprocessing mode isn't recognized. |
430 | 428 | save_graph_def | tensorflow/tensorflow/examples/speech_commands/freeze.py | 161 | function | Writes a graph def file out to disk.
Args:
file_name: Where to save the file.
frozen_graph_def: GraphDef proto object to save. |
431 | 429 | save_saved_model | tensorflow/tensorflow/examples/speech_commands/freeze.py | 176 | function | Writes a SavedModel out to disk.
Args:
file_name: Where to save the file.
sess: TensorFlow session containing the graph.
input_tensor: Tensor object defining the input's properties.
output_tensor: Tensor object defining the output's properties. |
432 | 430 | main | tensorflow/tensorflow/examples/speech_commands/freeze.py | 211 | function | |
433 | 431 | FreezeTest | tensorflow/tensorflow/examples/speech_commands/freeze_test.py | 30 | class | |
434 | 432 | mix_in_audio_sample | tensorflow/tensorflow/examples/speech_commands/generate_streaming_test_wav.py | 55 | function | Mixes the sample data into the main track at the specified offset.
Args:
track_data: Numpy array holding main audio data. Modified in-place.
track_offset: Where to mix the sample into the main track.
sample_data: Numpy array of audio data to mix into the main track.
sample_offset: Where to start in the audio sample.
clip_duration: How long the sample segment is.
sample_volume: Loudness to mix the sample in at.
ramp_in: Length in samples of volume increase stage.
ramp_out: Length in samples of volume decrease stage. |
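The mixing logic the docstring above describes can be sketched as follows. This is a simplified stand-in using plain Python lists for clarity (the real function works on numpy arrays in place); the parameter names follow the docstring, but the body is an assumption:

```python
def mix_in_audio_sample(track_data, track_offset, sample_data,
                        sample_offset, clip_duration, sample_volume,
                        ramp_in, ramp_out):
    """Adds a scaled sample into track_data in place, with linear fades."""
    for i in range(clip_duration):
        # Stop if either buffer runs out.
        if track_offset + i >= len(track_data):
            break
        if sample_offset + i >= len(sample_data):
            break
        # Linear envelope: fade in over ramp_in samples, hold at full
        # volume, then fade out over the final ramp_out samples.
        if i < ramp_in:
            envelope = i / ramp_in
        elif i > clip_duration - ramp_out:
            envelope = (clip_duration - i) / ramp_out
        else:
            envelope = 1.0
        track_data[track_offset + i] += (
            sample_data[sample_offset + i] * sample_volume * envelope)
```

For example, mixing a constant-1.0 sample at half volume with no ramps simply adds 0.5 to the covered region of the track.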
435 | 433 | main | tensorflow/tensorflow/examples/speech_commands/generate_streaming_test_wav.py | 86 | function | |
436 | 434 | GenerateStreamingTestWavTest | tensorflow/tensorflow/examples/speech_commands/generate_streaming_test_wav_test.py | 27 | class | |
437 | 435 | prepare_words_list | tensorflow/tensorflow/examples/speech_commands/input_data.py | 58 | function | Prepends common tokens to the custom word list.
Args:
wanted_words: List of strings containing the custom words.
Returns:
List with the standard silence and unknown tokens added. |
438 | 436 | which_set | tensorflow/tensorflow/examples/speech_commands/input_data.py | 70 | function | Determines which data partition the file should belong to.
We want to keep files in the same training, validation, or testing sets even
if new ones are added over time. This makes it less likely that testing
samples will accidentally be reused in training when long runs are restarted
for example. To keep this stability, a hash of the filename is taken and used
to determine which set it should belong to. This determination only depends on
the name and the set proportions, so it won't change as other files are added.
It's also useful to associate particular files as related (for example words
spoken by the same person), so anything after '_nohash_' in a filename is
ignored for set determination. This ensures that 'bobby_nohash_0.wav' and
'bobby_nohash_1.wav' are always in the same set, for example.
Args:
filename: File path of the data sample.
validation_percentage: How much of the data set to use for validation.
testing_percentage: How much of the data set to use for testing.
Returns:
String, one of 'training', 'validation', or 'testing'. |
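The stable hash-based split this docstring describes can be sketched as below. The structure follows the description (hash the filename after stripping the '_nohash_' suffix, then scale into a percentage), but the constant and exact arithmetic are assumptions and may differ from the real function:

```python
import hashlib
import os
import re

# Bounds the hash before scaling it to a percentage (assumed constant).
MAX_NUM_WAVS_PER_CLASS = 2**27 - 1

def which_set(filename, validation_percentage, testing_percentage):
    """Deterministically assigns a file to 'training', 'validation', or 'testing'."""
    base_name = os.path.basename(filename)
    # Ignore anything after '_nohash_' so related recordings (e.g. the
    # same speaker) always land in the same partition.
    hash_name = re.sub(r'_nohash_.*$', '', base_name)
    digest = hashlib.sha1(hash_name.encode('utf-8')).hexdigest()
    percentage_hash = ((int(digest, 16) % (MAX_NUM_WAVS_PER_CLASS + 1)) *
                       (100.0 / MAX_NUM_WAVS_PER_CLASS))
    if percentage_hash < validation_percentage:
        return 'validation'
    elif percentage_hash < (testing_percentage + validation_percentage):
        return 'testing'
    return 'training'
```

Because only the stripped name and the set proportions enter the hash, 'bobby_nohash_0.wav' and 'bobby_nohash_1.wav' always map to the same set.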
439 | 437 | load_wav_file | tensorflow/tensorflow/examples/speech_commands/input_data.py | 118 | function | Loads an audio file and returns a float PCM-encoded array of samples.
Args:
filename: Path to the .wav file to load.
Returns:
Numpy array holding the sample data as floats between -1.0 and 1.0. |
440 | 438 | save_wav_file | tensorflow/tensorflow/examples/speech_commands/input_data.py | 136 | function | Saves audio sample data to a .wav audio file.
Args:
filename: Path to save the file to.
wav_data: 2D array of float PCM-encoded audio data.
sample_rate: Samples per second to encode in the file. |
441 | 439 | get_features_range | tensorflow/tensorflow/examples/speech_commands/input_data.py | 160 | function | Returns the expected min/max for generated features.
Args:
model_settings: Information about the current model being trained.
Returns:
Min/max float pair holding the range of features.
Raises:
Exception: If preprocessing mode isn't recognized. |
442 | 440 | AudioProcessor | tensorflow/tensorflow/examples/speech_commands/input_data.py | 190 | class | Handles loading, partitioning, and preparing audio training data. |
443 | 441 | InputDataTest | tensorflow/tensorflow/examples/speech_commands/input_data_test.py | 33 | class | |
444 | 442 | load_graph | tensorflow/tensorflow/examples/speech_commands/label_wav.py | 43 | function | Unpersists graph from file as default graph. |
445 | 443 | load_labels | tensorflow/tensorflow/examples/speech_commands/label_wav.py | 51 | function | Read in labels, one label per line. |
446 | 444 | run_graph | tensorflow/tensorflow/examples/speech_commands/label_wav.py | 56 | function | Runs the audio data through the graph and prints predictions. |
447 | 445 | label_wav | tensorflow/tensorflow/examples/speech_commands/label_wav.py | 77 | function | Loads the model and labels, and runs the inference to print predictions. |
448 | 446 | main | tensorflow/tensorflow/examples/speech_commands/label_wav.py | 98 | function | Entry point for script, converts flags to arguments. |
449 | 447 | load_graph | tensorflow/tensorflow/examples/speech_commands/label_wav_dir.py | 44 | function | Unpersists graph from file as default graph. |
450 | 448 | load_labels | tensorflow/tensorflow/examples/speech_commands/label_wav_dir.py | 52 | function | Read in labels, one label per line. |
451 | 449 | run_graph | tensorflow/tensorflow/examples/speech_commands/label_wav_dir.py | 57 | function | Runs the audio data through the graph and prints predictions. |
452 | 450 | label_wav | tensorflow/tensorflow/examples/speech_commands/label_wav_dir.py | 85 | function | Loads the model and labels, and runs the inference to print predictions. |
453 | 451 | main | tensorflow/tensorflow/examples/speech_commands/label_wav_dir.py | 101 | function | Entry point for script, converts flags to arguments. |
454 | 452 | LabelWavTest | tensorflow/tensorflow/examples/speech_commands/label_wav_test.py | 29 | class | |
455 | 453 | _next_power_of_two | tensorflow/tensorflow/examples/speech_commands/models.py | 27 | function | Calculates the smallest enclosing power of two for an input.
Args:
x: Positive float or integer number.
Returns:
Next largest power of two integer. |
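The entry above amounts to computing 2 raised to the ceiling of log2(x). A minimal sketch (the helper name mirrors the index entry; the real implementation may differ):

```python
import math

def next_power_of_two(x):
    # Smallest power of two that is >= x, for positive x.
    # Note: very large floats can hit log2 rounding; fine for typical
    # audio-window sizes.
    return 1 if x <= 1 else 2 ** int(math.ceil(math.log2(x)))
```

So an input of 5 rounds up to 8, and an exact power of two maps to itself.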
456 | 454 | prepare_model_settings | tensorflow/tensorflow/examples/speech_commands/models.py | 39 | function | Calculates common settings needed for all models.
Args:
label_count: How many classes are to be recognized.
sample_rate: Number of audio samples per second.
clip_duration_ms: Length of each audio clip to be analyzed.
window_size_ms: Duration of frequency analysis window.
window_stride_ms: How far to move in time between frequency windows.
feature_bin_count: Number of frequency bins to use for analysis.
preprocess: How the spectrogram is processed to produce features.
Returns:
Dictionary containing common settings.
Raises:
ValueError: If the preprocessing mode isn't recognized. |
457 | 455 | create_model | tensorflow/tensorflow/examples/speech_commands/models.py | 95 | function | Builds a model of the requested architecture compatible with the settings.
There are many possible ways of deriving predictions from a spectrogram
input, so this function provides an abstract interface for creating different
kinds of models in a black-box way. You need to pass in a TensorFlow node as
the 'fingerprint' input, and this should output a batch of 1D features that
describe the audio. Typically this will be derived from a spectrogram that's
been run through an MFCC, but in theory it can be any feature vector of the
size specified in model_settings['fingerprint_size'].
The function will build the graph it needs in the current TensorFlow graph,
and return the tensorflow output that will contain the 'logits' input to the
softmax prediction process. If training flag is on, it will also return a
placeholder node that can be used to control the dropout amount.
See the implementations below for the possible model architectures that can be
requested.
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
model_settings: Dictionary of information about the model.
model_architecture: String specifying which kind of model to create.
is_training: Whether the model is going to be used for training.
runtime_settings: Dictionary of information about the runtime.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder.
Raises:
Exception: If the architecture type isn't recognized. |
458 | 456 | load_variables_from_checkpoint | tensorflow/tensorflow/examples/speech_commands/models.py | 153 | function | Utility function to centralize checkpoint restoration.
Args:
sess: TensorFlow session.
start_checkpoint: Path to saved checkpoint on disk. |
459 | 457 | create_single_fc_model | tensorflow/tensorflow/examples/speech_commands/models.py | 164 | function | Builds a model with a single hidden fully-connected layer.
This is a very simple model with just one matmul and bias layer. As you'd
expect, it doesn't produce very accurate results, but it is very fast and
simple, so it's useful for sanity testing.
Here's the layout of the graph:
(fingerprint_input)
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
model_settings: Dictionary of information about the model.
is_training: Whether the model is going to be used for training.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder. |
460 | 458 | create_conv_model | tensorflow/tensorflow/examples/speech_commands/models.py | 207 | function | Builds a standard convolutional model.
This is roughly the network labeled as 'cnn-trad-fpool3' in the
'Convolutional Neural Networks for Small-footprint Keyword Spotting' paper:
http://www.isca-speech.org/archive/interspeech_2015/papers/i15_1478.pdf
Here's the layout of the graph:
(fingerprint_input)
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[MaxPool]
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[MaxPool]
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
This produces fairly good quality results, but can involve a large number of
weight parameters and computations. For a cheaper alternative from the same
paper with slightly less accuracy, see 'low_latency_conv' below.
During training, dropout nodes are introduced after each relu, controlled by a
placeholder.
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
model_settings: Dictionary of information about the model.
is_training: Whether the model is going to be used for training.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder. |
461 | 459 | create_low_latency_conv_model | tensorflow/tensorflow/examples/speech_commands/models.py | 333 | function | Builds a convolutional model with low compute requirements.
This is roughly the network labeled as 'cnn-one-fstride4' in the
'Convolutional Neural Networks for Small-footprint Keyword Spotting' paper:
http://www.isca-speech.org/archive/interspeech_2015/papers/i15_1478.pdf
Here's the layout of the graph:
(fingerprint_input)
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
This produces slightly lower quality results than the 'conv' model, but needs
fewer weight parameters and computations.
During training, dropout nodes are introduced after the relu, controlled by a
placeholder.
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
model_settings: Dictionary of information about the model.
is_training: Whether the model is going to be used for training.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder. |
462 | 460 | create_low_latency_svdf_model | tensorflow/tensorflow/examples/speech_commands/models.py | 462 | function | Builds an SVDF model with low compute requirements.
This is based on the topology presented in the 'Compressing Deep Neural
Networks using a Rank-Constrained Topology' paper:
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43813.pdf
Here's the layout of the graph:
(fingerprint_input)
v
[SVDF]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
This model produces lower recognition accuracy than the 'conv' model above,
but requires fewer weight parameters and significantly fewer computations.
During training, dropout nodes are introduced after the relu, controlled by a
placeholder.
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
The node is expected to produce a 2D Tensor of shape:
[batch, model_settings['fingerprint_width'] *
model_settings['spectrogram_length']]
with the features corresponding to the same time slot arranged contiguously,
with the oldest slot at index [:, 0] and the newest at [:, -1].
model_settings: Dictionary of information about the model.
is_training: Whether the model is going to be used for training.
runtime_settings: Dictionary of information about the runtime.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder.
Raises:
ValueError: If the inputs tensor is incorrectly shaped. |
463 | 461 | create_tiny_conv_model | tensorflow/tensorflow/examples/speech_commands/models.py | 673 | function | Builds a convolutional model aimed at microcontrollers.
Devices like DSPs and microcontrollers can have very small amounts of
memory and limited processing power. This model is designed to use less
than 20KB of working RAM, and fit within 32KB of read-only (flash) memory.
Here's the layout of the graph:
(fingerprint_input)
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
This doesn't produce particularly accurate results, but it's designed to be
used as the first stage of a pipeline running on a low-energy piece of
hardware that can always be on; it then wakes higher-power chips when a
possible utterance has been found, so that more accurate analysis can be done.
During training, a dropout node is introduced after the relu, controlled by a
placeholder.
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
model_settings: Dictionary of information about the model.
is_training: Whether the model is going to be used for training.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder. |
464 | 462 | create_tiny_embedding_conv_model | tensorflow/tensorflow/examples/speech_commands/models.py | 765 | function | Builds a convolutional model aimed at microcontrollers.
Devices like DSPs and microcontrollers can have very small amounts of
memory and limited processing power. This model is designed to use less
than 20KB of working RAM, and fit within 32KB of read-only (flash) memory.
Here's the layout of the graph:
(fingerprint_input)
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
This doesn't produce particularly accurate results, but it's designed to be
used as the first stage of a pipeline running on a low-energy piece of
hardware that can always be on; it then wakes higher-power chips when a
possible utterance has been found, so that more accurate analysis can be done.
During training, a dropout node is introduced after the relu, controlled by a
placeholder.
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
model_settings: Dictionary of information about the model.
is_training: Whether the model is going to be used for training.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder. |
465 | 463 | ModelsTest | tensorflow/tensorflow/examples/speech_commands/models_test.py | 28 | class | |
466 | 464 | RecognizeResult | tensorflow/tensorflow/examples/speech_commands/recognize_commands.py | 25 | class | Stores a recognition result temporarily.
Attributes:
founded_command: A string with the command that was just found. Default
value is '_silence_'.
score: A float representing the confidence in the found command. Default
value is zero.
is_new_command: A boolean indicating whether the found command is new
compared to the last one. Default value is False. |
467 | 465 | RecognizeCommands | tensorflow/tensorflow/examples/speech_commands/recognize_commands.py | 67 | class | Smooths inference results using an averaging window.
Maintains a sliding window over the audio stream. Each new result (a pair of
the confidences for all classes and the start timestamp of the input audio
clip) is added as soon as inference produces it, while the oldest result and
any abnormal values are removed. The results in the window are then smoothed
to yield the most reliable command for that period.
Attributes:
_label: A list containing the commands, one per line.
_average_window_duration: The length of the averaging window.
_detection_threshold: A confidence threshold for filtering out unreliable
commands.
_suppression_ms: The minimum number of milliseconds that must separate two
reliably found commands.
_minimum_count: The minimum number of results the averaging window must
cover.
_previous_results: A deque storing previous results.
_label_count: The length of the label list.
_previous_top_label: The last command found. Initial value is '_silence_'.
_previous_top_time: The timestamp of the previous result. Default is -np.inf. |
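The averaging-window idea described above can be sketched with a deque. The class and parameter names below are illustrative, not the real RecognizeCommands API, and the real class additionally enforces _suppression_ms and _minimum_count:

```python
from collections import deque

class AverageWindow:
    """Sliding-window smoother over per-frame class scores.

    Keeps (time_ms, scores) pairs inside a time window and reports the
    class whose time-averaged score is highest, if it clears a threshold.
    """

    def __init__(self, labels, window_ms=500, threshold=0.5):
        self.labels = labels
        self.window_ms = window_ms
        self.threshold = threshold
        self.results = deque()

    def process(self, time_ms, scores):
        self.results.append((time_ms, scores))
        # Drop results that have fallen out of the averaging window.
        while self.results and time_ms - self.results[0][0] > self.window_ms:
            self.results.popleft()
        n = len(self.results)
        averaged = [sum(s[i] for _, s in self.results) / n
                    for i in range(len(self.labels))]
        best = max(range(len(self.labels)), key=lambda i: averaged[i])
        if averaged[best] >= self.threshold:
            return self.labels[best], averaged[best]
        # Below threshold: report silence rather than an unreliable command.
        return '_silence_', averaged[best]
```

Averaging over the window suppresses one-frame spikes: a command is only reported while its score stays high across the whole window.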
468 | 466 | load_graph | tensorflow/tensorflow/examples/speech_commands/test_streaming_accuracy.py | 80 | function | Reads a TensorFlow model and creates a default graph object. |
469 | 467 | read_label_file | tensorflow/tensorflow/examples/speech_commands/test_streaming_accuracy.py | 92 | function | Loads a list of labels. |
470 | 468 | read_wav_file | tensorflow/tensorflow/examples/speech_commands/test_streaming_accuracy.py | 101 | function | Load a wav file and return sample_rate and numpy data of float64 type. |
471 | 469 | main | tensorflow/tensorflow/examples/speech_commands/test_streaming_accuracy.py | 111 | function | |
472 | 470 | main | tensorflow/tensorflow/examples/speech_commands/train.py | 88 | function | |
473 | 471 | verbosity_arg | tensorflow/tensorflow/examples/speech_commands/train.py | 480 | function | Parses verbosity argument.
Args:
value: A member of tf.logging.
Raises:
ArgumentTypeError: Not an expected value. |
474 | 472 | requires_contrib | tensorflow/tensorflow/examples/speech_commands/train_test.py | 32 | function | |
475 | 473 | DictStruct | tensorflow/tensorflow/examples/speech_commands/train_test.py | 44 | class | |
476 | 474 | TrainTest | tensorflow/tensorflow/examples/speech_commands/train_test.py | 50 | class | |
477 | 475 | wav_to_features | tensorflow/tensorflow/examples/speech_commands/wav_to_features.py | 47 | function | Converts an audio file into its corresponding feature map.
Args:
sample_rate: Expected sample rate of the wavs.
clip_duration_ms: Expected duration in milliseconds of the wavs.
window_size_ms: How long each spectrogram timeslice is.
window_stride_ms: How far to move in time between spectrogram timeslices.
feature_bin_count: How many bins to use for the feature fingerprint.
quantize: Whether to train the model for eight-bit deployment.
preprocess: Spectrogram processing mode; "mfcc", "average" or "micro".
input_wav: Path to the audio WAV file to read.
output_c_file: Where to save the generated C source file. |
478 | 476 | main | tensorflow/tensorflow/examples/speech_commands/wav_to_features.py | 125 | function | |
479 | 477 | WavToFeaturesTest | tensorflow/tensorflow/examples/speech_commands/wav_to_features_test.py | 30 | class | |
480 | 478 | create_model | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 69 | function | Model to recognize digits in the MNIST dataset.
Network structure is equivalent to:
https://github.com/tensorflow/tensorflow/blob/r1.5/tensorflow/examples/tutorials/mnist/mnist_deep.py
and
https://github.com/tensorflow/models/blob/master/tutorials/image/mnist/convolutional.py
But uses the tf.keras API.
Returns:
A tf.keras.Model. |
481 | 479 | mnist_datasets | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 115 | function | |
482 | 480 | loss | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 125 | function | |
483 | 481 | compute_accuracy | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 131 | function | |
484 | 482 | train | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 140 | function | Trains model on `dataset` using `optimizer`. |
485 | 483 | test | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 166 | function | Perform an evaluation of `model` on the examples from `dataset`. |
486 | 484 | train_and_export | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 184 | function | Run MNIST training and eval loop in eager mode.
Args:
flags_obj: An object containing parsed flag values. |
487 | 485 | import_and_eval | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 237 | function | |
488 | 486 | apply_clean | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 247 | function | |
489 | 487 | main | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 254 | function | |
490 | 488 | placeholder_inputs | tensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py | 37 | function | Generate placeholder variables to represent the input tensors.
These placeholders are used as inputs by the rest of the model building
code and will be fed from the downloaded data in the .run() loop, below.
Args:
batch_size: The batch size will be baked into both placeholders.
Returns:
images_placeholder: Images placeholder.
labels_placeholder: Labels placeholder. |
491 | 489 | fill_feed_dict | tensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py | 59 | function | Fills the feed_dict for training the given step.
A feed_dict takes the form of:
feed_dict = {
<placeholder>: <tensor of values to be passed for placeholder>,
....
}
Args:
data_set: The set of images and labels, from input_data.read_data_sets()
images_pl: The images placeholder, from placeholder_inputs().
labels_pl: The labels placeholder, from placeholder_inputs().
Returns:
feed_dict: The feed dictionary mapping from placeholders to values. |
492 | 490 | do_eval | tensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py | 87 | function | Runs one evaluation against the full epoch of data.
Args:
sess: The session in which the model has been trained.
eval_correct: The Tensor that returns the number of correct predictions.
images_placeholder: The images placeholder.
labels_placeholder: The labels placeholder.
data_set: The set of images and labels to evaluate, from
input_data.read_data_sets(). |
493 | 491 | run_training | tensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py | 116 | function | Train MNIST for a number of steps. |
494 | 492 | main | tensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py | 218 | function | |
495 | 493 | _read32 | tensorflow/tensorflow/examples/tutorials/mnist/input_data.py | 43 | function | |
496 | 494 | _extract_images | tensorflow/tensorflow/examples/tutorials/mnist/input_data.py | 49 | function | Extract the images into a 4D uint8 numpy array [index, y, x, depth].
Args:
f: A file object that can be passed into a gzip reader.
Returns:
data: A 4D uint8 numpy array [index, y, x, depth].
Raises:
ValueError: If the bytestream does not start with 2051. |
497 | 495 | _dense_to_one_hot | tensorflow/tensorflow/examples/tutorials/mnist/input_data.py | 78 | function | Convert class labels from scalars to one-hot vectors. |
498 | 496 | _extract_labels | tensorflow/tensorflow/examples/tutorials/mnist/input_data.py | 88 | function | Extract the labels into a 1D uint8 numpy array [index].
Args:
f: A file object that can be passed into a gzip reader.
one_hot: Does one hot encoding for the result.
num_classes: Number of classes for the one hot encoding.
Returns:
labels: a 1D uint8 numpy array.
Raises:
ValueError: If the bytestream doesn't start with 2049. |
499 | 497 | _DataSet | tensorflow/tensorflow/examples/tutorials/mnist/input_data.py | 116 | class | Container class for a _DataSet (deprecated).
THIS CLASS IS DEPRECATED. |
500 | 498 | _maybe_download | tensorflow/tensorflow/examples/tutorials/mnist/input_data.py | 242 | function | Download the data from source url, unless it's already here.
Args:
filename: string, name of the file in the directory.
work_directory: string, path to working directory.
source_url: url to download from if file doesn't exist.
Returns:
Path to resulting file. |
501 | 499 | read_data_sets | tensorflow/tensorflow/examples/tutorials/mnist/input_data.py | 266 | function | |
502 | 500 | inference | tensorflow/tensorflow/examples/tutorials/mnist/mnist.py | 45 | function | Build the MNIST model up to where it may be used for inference.
Args:
images: Images placeholder, from inputs().
hidden1_units: Size of the first hidden layer.
hidden2_units: Size of the second hidden layer.
Returns:
softmax_linear: Output tensor with the computed logits. |
503 | 501 | loss | tensorflow/tensorflow/examples/tutorials/mnist/mnist.py | 86 | function | Calculates the loss from the logits and the labels.
Args:
logits: Logits tensor, float - [batch_size, NUM_CLASSES].
labels: Labels tensor, int32 - [batch_size].
Returns:
loss: Loss tensor of type float. |
504 | 502 | training | tensorflow/tensorflow/examples/tutorials/mnist/mnist.py | 101 | function | Sets up the training Ops.
Creates a summarizer to track the loss over time in TensorBoard.
Creates an optimizer and applies the gradients to all trainable variables.
The Op returned by this function is what must be passed to the
`sess.run()` call to cause the model to train.
Args:
loss: Loss tensor, from loss().
learning_rate: The learning rate to use for gradient descent.
Returns:
train_op: The Op for training. |
505 | 503 | evaluation | tensorflow/tensorflow/examples/tutorials/mnist/mnist.py | 130 | function | Evaluate the quality of the logits at predicting the label.
Args:
logits: Logits tensor, float - [batch_size, NUM_CLASSES].
labels: Labels tensor, int32 - [batch_size], with values in the
range [0, NUM_CLASSES).
Returns:
A scalar int32 tensor with the number of examples (out of batch_size)
that were predicted correctly. |
506 | 504 | main | tensorflow/tensorflow/examples/tutorials/mnist/mnist_softmax_xla.py | 34 | function | |
507 | 505 | train | tensorflow/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py | 38 | function | |
508 | 506 | main | tensorflow/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py | 185 | function | |
509 | 507 | _hash_file | tensorflow/tensorflow/examples/tutorials/word2vec/word2vec_basic.py | 41 | function | |
510 | 508 | word2vec_basic | tensorflow/tensorflow/examples/tutorials/word2vec/word2vec_basic.py | 49 | function | Example of building, training and visualizing a word2vec model. |
511 | 509 | main | tensorflow/tensorflow/examples/tutorials/word2vec/word2vec_basic.py | 360 | function | |
512 | 510 | suppress_exception | tensorflow/tensorflow/lite/examples/experimental_new_converter/stack_trace_example.py | 37 | function | |
513 | 511 | TestModule | tensorflow/tensorflow/lite/examples/experimental_new_converter/stack_trace_example.py | 46 | class | The test model has unsupported op. |
514 | 512 | test_from_saved_model | tensorflow/tensorflow/lite/examples/experimental_new_converter/stack_trace_example.py | 57 | function | Displays the stack trace when converting a saved model. |
515 | 513 | test_from_concrete_function | tensorflow/tensorflow/lite/examples/experimental_new_converter/stack_trace_example.py | 71 | function | Displays the stack trace when converting a concrete function. |
516 | 514 | main | tensorflow/tensorflow/lite/examples/experimental_new_converter/stack_trace_example.py | 83 | function | |
517 | 515 | load_labels | tensorflow/tensorflow/lite/examples/python/label_image.py | 29 | function | |
518 | 516 | _convert_bytes_to_cc_source | tensorflow/tensorflow/lite/experimental/acceleration/compatibility/convert_binary_to_cc_source.py | 35 | function | Returns strings representing a C++ constant array containing `data`.
Args:
data: Byte array that will be converted into a C++ constant.
array_name: String to use as the variable name for the constant array.
max_line_width: The longest line length, for formatting purposes.
include_guard: Name to use for the include guard macro definition.
include_path: Optional path to include in the source file.
use_tensorflow_license: Whether to include the standard TensorFlow Apache2
license in the generated files.
Returns:
Text that can be compiled as a C++ source file to link in the data as a
literal array of values.
Text that can be used as a C++ header file to reference the literal array. |
519 | 517 | main | tensorflow/tensorflow/lite/experimental/acceleration/compatibility/convert_binary_to_cc_source.py | 155 | function | |
520 | 518 | BidirectionalSequenceLstmTest | tensorflow/tensorflow/lite/experimental/examples/lstm/bidirectional_sequence_lstm_test.py | 36 | class | |
521 | 519 | BidirectionalSequenceRnnTest | tensorflow/tensorflow/lite/experimental/examples/lstm/bidirectional_sequence_rnn_test.py | 38 | class | |
522 | 520 | dynamic_rnn | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn.py | 42 | function | Creates a recurrent neural network specified by RNNCell `cell`.
Performs fully dynamic unrolling of `inputs`.
Example:
```python
# create a BasicRNNCell
rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)
# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]
# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)
# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
initial_state=initial_state,
dtype=tf.float32)
```
```python
# create 2 LSTMCells
rnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]
# create a RNN cell composed sequentially of a number of RNNCells
multi_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers)
# 'outputs' is a tensor of shape [batch_size, max_time, 256]
# 'state' is a N-tuple where N is the number of LSTMCells containing a
# tf.nn.rnn_cell.LSTMStateTuple for each cell
outputs, state = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=data,
dtype=tf.float32)
```
Args:
cell: An instance of RNNCell.
inputs: The RNN inputs.
If `time_major == False` (default), this must be a `Tensor` of shape:
`[batch_size, max_time, ...]`, or a nested tuple of such elements.
If `time_major == True`, this must be a `Tensor` of shape: `[max_time,
batch_size, ...]`, or a nested tuple of such elements. This may also be
a (possibly nested) tuple of Tensors satisfying this property. The
first two dimensions must match across all the inputs, but otherwise the
ranks and other shape components may differ. In this case, input to
`cell` at each time-step will replicate the structure of these tuples,
except for the time dimension (from which the time is taken). The input
to `cell` at each time step will be a `Tensor` or (possibly nested)
tuple of Tensors each with dimensions `[batch_size, ...]`.
sequence_length: (optional) An int32/int64 vector sized `[batch_size]`. Used
to copy-through state and zero-out outputs when past a batch element's
sequence length. So it's more for performance than correctness.
initial_state: (optional) An initial state for the RNN. If `cell.state_size`
is an integer, this must be a `Tensor` of appropriate type and shape
`[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this
should be a tuple of tensors having shapes `[batch_size, s] for s in
cell.state_size`.
dtype: (optional) The data type for the initial state and expected output.
Required if initial_state is not provided or RNN state has a heterogeneous
dtype.
parallel_iterations: (Default: 32). The number of iterations to run in
parallel. Those operations which do not have any temporal dependency and
can be run in parallel, will be. This parameter trades off time for
space. Values >> 1 use more memory but take less time, while smaller
values use less memory but computations take longer.
swap_memory: Transparently swap the tensors produced in forward inference
but needed for back prop from GPU to CPU. This allows training RNNs which
would typically not fit on a single GPU, with very minimal (or no)
performance penalty.
time_major: The shape format of the `inputs` and `outputs` Tensors. If true,
these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false,
these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using
`time_major = True` is a bit more efficient because it avoids transposes
at the beginning and end of the RNN calculation. However, most TensorFlow
data is batch-major, so by default this function accepts input and emits
output in batch-major form.
scope: VariableScope for the created subgraph; defaults to "rnn".
Returns:
A pair (outputs, state) where:
outputs: The RNN output `Tensor`.
If time_major == False (default), this will be a `Tensor` shaped:
`[batch_size, max_time, cell.output_size]`.
If time_major == True, this will be a `Tensor` shaped:
`[max_time, batch_size, cell.output_size]`.
Note, if `cell.output_size` is a (possibly nested) tuple of integers
or `TensorShape` objects, then `outputs` will be a tuple having the
same structure as `cell.output_size`, containing Tensors having shapes
corresponding to the shape data in `cell.output_size`.
state: The final state. If `cell.state_size` is an int, this
will be shaped `[batch_size, cell.state_size]`. If it is a
`TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
If it is a (possibly nested) tuple of ints or `TensorShape`, this will
be a tuple having the corresponding shapes. If cells are `LSTMCells`
`state` will be a tuple containing a `LSTMStateTuple` for each cell.
Raises:
TypeError: If `cell` is not an instance of RNNCell.
ValueError: If inputs is None or an empty list.
RuntimeError: If not using control flow v2. |
523 | 521 | bidirectional_dynamic_rnn | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn.py | 279 | function | Creates a dynamic version of bidirectional recurrent neural network.
Takes input and builds independent forward and backward RNNs. The input_size
of forward and backward cell must match. The initial state for both directions
is zero by default (but can be set optionally) and no intermediate states are
ever returned -- the network is fully unrolled for the given (passed in)
length(s) of the sequence(s) or completely unrolled if length(s) is not
given.
Args:
cell_fw: An instance of RNNCell, to be used for forward direction.
cell_bw: An instance of RNNCell, to be used for backward direction.
inputs: The RNN inputs.
If time_major == False (default), this must be a tensor of shape:
`[batch_size, max_time, ...]`, or a nested tuple of such elements.
If time_major == True, this must be a tensor of shape: `[max_time,
batch_size, ...]`, or a nested tuple of such elements.
sequence_length: (optional) An int32/int64 vector, size `[batch_size]`,
containing the actual lengths for each of the sequences in the batch. If
not provided, all batch entries are assumed to be full sequences; and time
reversal is applied from time `0` to `max_time` for each sequence.
initial_state_fw: (optional) An initial state for the forward RNN. This must
be a tensor of appropriate type and shape `[batch_size,
cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a
tuple of tensors having shapes `[batch_size, s] for s in
cell_fw.state_size`.
initial_state_bw: (optional) Same as for `initial_state_fw`, but using the
corresponding properties of `cell_bw`.
dtype: (optional) The data type for the initial states and expected output.
Required if initial_states are not provided or RNN states have a
heterogeneous dtype.
parallel_iterations: (Default: 32). The number of iterations to run in
parallel. Those operations which do not have any temporal dependency and
can be run in parallel, will be. This parameter trades off time for
space. Values >> 1 use more memory but take less time, while smaller
values use less memory but computations take longer.
swap_memory: Transparently swap the tensors produced in forward inference
but needed for back prop from GPU to CPU. This allows training RNNs which
would typically not fit on a single GPU, with very minimal (or no)
performance penalty.
time_major: The shape format of the `inputs` and `outputs` Tensors. If true,
these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false,
these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using
`time_major = True` is a bit more efficient because it avoids transposes
at the beginning and end of the RNN calculation. However, most TensorFlow
data is batch-major, so by default this function accepts input and emits
output in batch-major form.
scope: VariableScope for the created subgraph; defaults to
"bidirectional_rnn"
Returns:
A tuple (outputs, output_states) where:
outputs: A tuple (output_fw, output_bw) containing the forward and
the backward rnn output `Tensor`.
If time_major == False (default),
output_fw will be a `Tensor` shaped:
`[batch_size, max_time, cell_fw.output_size]`
and output_bw will be a `Tensor` shaped:
`[batch_size, max_time, cell_bw.output_size]`.
If time_major == True,
output_fw will be a `Tensor` shaped:
`[max_time, batch_size, cell_fw.output_size]`
and output_bw will be a `Tensor` shaped:
`[max_time, batch_size, cell_bw.output_size]`.
It returns a tuple instead of a single concatenated `Tensor`, unlike
in the `bidirectional_rnn`. If the concatenated one is preferred,
the forward and backward outputs can be concatenated as
`tf.concat(outputs, 2)`.
output_states: A tuple (output_state_fw, output_state_bw) containing
the forward and the backward final states of bidirectional rnn.
Raises:
TypeError: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`. |
524 | 522 | TfLiteRNNCell | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 39 | class | The most basic RNN cell.
This is used only for TfLite; it provides hints and it also puts the
variables in the format desired by the tflite ops. |
525 | 523 | TFLiteLSTMCell | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 162 | class | Long short-term memory unit (LSTM) recurrent network cell.
This is used only for TfLite; it provides hints and it also puts the
variables in the format desired by the tflite ops (transposed and separated).
The default non-peephole implementation is based on:
https://pdfs.semanticscholar.org/1154/0131eae85b2e11d53df7f1360eeb6476e7f4.pdf
Felix Gers, Jurgen Schmidhuber, and Fred Cummins.
"Learning to forget: Continual prediction with LSTM." IET, 850-855, 1999.
The peephole implementation is based on:
https://research.google.com/pubs/archive/43905.pdf
Hasim Sak, Andrew Senior, and Francoise Beaufays.
"Long short-term memory recurrent neural network architectures for
large scale acoustic modeling." INTERSPEECH, 2014.
The class uses optional peephole connections, optional cell clipping, and
an optional projection layer.
Note that this cell is not optimized for performance. Please use
`tf.contrib.cudnn_rnn.CudnnLSTM` for better performance on GPU, or
`tf.contrib.rnn.LSTMBlockCell` and `tf.contrib.rnn.LSTMBlockFusedCell` for
better performance on CPU. |
526 | 524 | UnidirectionalSequenceLstmTest | tensorflow/tensorflow/lite/experimental/examples/lstm/unidirectional_sequence_lstm_test.py | 36 | class | |
527 | 525 | UnidirectionalSequenceRnnTest | tensorflow/tensorflow/lite/experimental/examples/lstm/unidirectional_sequence_rnn_test.py | 37 | class | |
528 | 526 | AudioFeatureGenerationTest | tensorflow/tensorflow/lite/experimental/microfrontend/python/kernel_tests/audio_microfrontend_op_test.py | 35 | class | |
529 | 527 | audio_microfrontend | tensorflow/tensorflow/lite/experimental/microfrontend/python/ops/audio_microfrontend_op.py | 34 | function | Audio Microfrontend Op.
This Op converts a sequence of audio data into one or more
feature vectors containing filterbanks of the input. The
conversion process uses a lightweight library to perform:
1. A slicing window function
2. Short-time FFTs
3. Filterbank calculations
4. Noise reduction
5. PCAN Auto Gain Control
6. Logarithmic scaling
Args:
audio: 1D Tensor, int16 audio data in temporal ordering.
sample_rate: Integer, the sample rate of the audio in Hz.
window_size: Integer, length of desired time frames in ms.
window_step: Integer, length of step size for the next frame in ms.
num_channels: Integer, the number of filterbank channels to use.
upper_band_limit: Float, the highest frequency included in the filterbanks.
lower_band_limit: Float, the lowest frequency included in the filterbanks.
smoothing_bits: Int, scale up signal by 2^(smoothing_bits) before reduction.
even_smoothing: Float, smoothing coefficient for even-numbered channels.
odd_smoothing: Float, smoothing coefficient for odd-numbered channels.
min_signal_remaining: Float, fraction of signal to preserve in smoothing.
enable_pcan: Bool, enable PCAN auto gain control.
pcan_strength: Float, gain normalization exponent.
pcan_offset: Float, positive value added in the normalization denominator.
gain_bits: Int, number of fractional bits in the gain.
enable_log: Bool, enable logarithmic scaling of filterbanks.
scale_shift: Integer, scale filterbanks by 2^(scale_shift).
left_context: Integer, number of preceding frames to attach to each frame.
right_context: Integer, number of following frames to attach to each frame.
frame_stride: Integer, M frames to skip over, where output[n] = frame[n*M].
zero_padding: Bool, if left/right context is out-of-bounds, attach frame of
zeroes. Otherwise, frame[0] or frame[size-1] will be copied.
out_scale: Integer, divide all filterbanks by this number.
out_type: DType, type of the output Tensor, defaults to UINT16.
Returns:
filterbanks: 2D Tensor, each row is a time frame, each column is a channel.
Raises:
ValueError: If the audio tensor is not explicitly a vector. |
530 | 528 | SupportedOp | tensorflow/tensorflow/lite/experimental/tensorboard/ops_util.py | 26 | class | Spec of supported ops.
Args:
op: string of op name. |
531 | 529 | get_potentially_supported_ops | tensorflow/tensorflow/lite/experimental/tensorboard/ops_util.py | 35 | function | Returns operations potentially supported by TensorFlow Lite.
The potentially supported list contains ops that are partially or fully
supported, derived by simply scanning op names to check whether they can be
handled without real conversion and specific parameters.
Given that some ops may be partially supported, the optimal way to determine
if a model's operations are supported is by converting using the TensorFlow
Lite converter.
Returns:
A list of SupportedOp. |
532 | 530 | OpsUtilTest | tensorflow/tensorflow/lite/experimental/tensorboard/ops_util_test.py | 24 | class | |
533 | 531 | main | tensorflow/tensorflow/lite/g3doc/tools/build_java_api_docs.py | 53 | function | |
534 | 532 | main | tensorflow/tensorflow/lite/g3doc/tools/build_py_api_docs.py | 55 | function | |
535 | 533 | time_wrapping | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_augmentation.py | 29 | function | Generate (numerator/denominator)x speed data. |
536 | 534 | augment_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_augmentation.py | 43 | function | Perform data augmentation. |
537 | 535 | TestAugmentation | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_augmentation_test.py | 32 | class | |
538 | 536 | DataLoader | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_load.py | 35 | class | Loads data and prepares for training. |
539 | 537 | TestLoad | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_load_test.py | 30 | class | |
540 | 538 | prepare_original_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_prepare.py | 46 | function | Read collected data from files. |
541 | 539 | generate_negative_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_prepare.py | 86 | function | Generate negative data labeled as 'negative6~8'. |
542 | 540 | write_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_prepare.py | 143 | function | |
543 | 541 | TestPrepare | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_prepare_test.py | 32 | class | |
544 | 542 | read_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split.py | 40 | function | |
545 | 543 | split_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split.py | 51 | function | Splits data into train, validation and test according to ratio. |
546 | 544 | person_split | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split_person.py | 41 | function | Split data by person. |
547 | 545 | TestSplitPerson | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split_person_test.py | 28 | class | |
548 | 546 | TestSplit | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split_test.py | 29 | class | |
549 | 547 | reshape_function | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 37 | function | |
550 | 548 | calculate_model_size | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 42 | function | |
551 | 549 | build_cnn | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 51 | function | Builds a convolutional neural network in Keras. |
552 | 550 | build_lstm | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 78 | function | Builds an LSTM in Keras. |
553 | 551 | load_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 93 | function | |
554 | 552 | build_net | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 101 | function | |
555 | 553 | train_net | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 111 | function | Trains the model. |
556 | 554 | TestTrain | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train_test.py | 33 | class | |
557 | 555 | to_cc | tensorflow/tensorflow/lite/micro/examples/micro_speech/CMSIS/create_constants.py | 26 | function | Writes table values to a C++ source file. |
558 | 556 | to_h | tensorflow/tensorflow/lite/micro/examples/micro_speech/CMSIS/create_constants.py | 44 | function | Writes a header file for the table values. |
559 | 557 | new_data_to_array | tensorflow/tensorflow/lite/micro/examples/micro_speech/apollo3/captured_data_to_wav.py | 28 | function | |
560 | 558 | new_data_to_array | tensorflow/tensorflow/lite/micro/examples/micro_speech/apollo3/compare_1k.py | 29 | function | Converts file information to an in-memory array. |
561 | 559 | to_float | tensorflow/tensorflow/lite/micro/examples/micro_speech/apollo3/compare_1k.py | 63 | function | |
562 | 560 | check_file_existence | tensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py | 52 | function | |
563 | 561 | show_and_save_bitmaps | tensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py | 60 | function | Display and save a list of bitmaps.
Args:
input_file: input file name
bitmap_list: list of numpy arrays to represent bitmap images
channels: color channel count |
564 | 562 | reshape_bitmaps | tensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py | 87 | function | Reshape flat integer arrays.
Args:
frame_list: list of 1-D arrays to represent raw image data
width: image width in pixels
height: image height in pixels
channels: color channel count
Returns:
list of numpy arrays to represent bitmap images |
565 | 563 | parse_file | tensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py | 109 | function | Convert log file to array of pixels.
Args:
inputfile: log file to parse
width: image width in pixels
height: image height in pixels
channels: color channel count
Returns:
list of 1-D arrays to represent raw image data. |
566 | 564 | main | tensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py | 159 | function | |
567 | 565 | RawToBitmapTest | tensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap_test.py | 94 | class | |
568 | 566 | generate_conv_model | tensorflow/tensorflow/lite/micro/testing/generate_test_models.py | 34 | function | Creates a basic Keras model and converts to tflite.
This model does not make any relevant classifications. It only exists to
generate a model that is designed to run on embedded devices. |
569 | 567 | main | tensorflow/tensorflow/lite/micro/testing/generate_test_models.py | 74 | function | |
570 | 568 | rename_example_subfolder_files | tensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py | 29 | function | Moves source files in example subfolders to equivalents at root. |
571 | 569 | move_person_data | tensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py | 41 | function | Moves the downloaded person model into the examples folder. |
572 | 570 | move_person_data_experimental | tensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py | 61 | function | Moves the downloaded person model into the examples folder. |
573 | 571 | move_image_data_experimental | tensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py | 83 | function | Moves the downloaded image detection model into the examples folder. |
574 | 572 | rename_example_main_inos | tensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py | 104 | function | Makes sure the .ino sketch files match the example name. |
575 | 573 | main | tensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py | 114 | function | Control the rewriting of source files. |
576 | 574 | parse_args | tensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py | 124 | function | Converts the raw arguments into accessible flags. |
577 | 575 | sanitize_xml | tensorflow/tensorflow/lite/micro/tools/make/generate_keil_project.py | 29 | function | Uses an allowlist to avoid generating bad XML. |
578 | 576 | main | tensorflow/tensorflow/lite/micro/tools/make/generate_keil_project.py | 34 | function | Generates a Keil project file from a template source. |
579 | 577 | parse_args | tensorflow/tensorflow/lite/micro/tools/make/generate_keil_project.py | 82 | function | Converts the raw arguments into accessible flags. |
580 | 578 | main | tensorflow/tensorflow/lite/micro/tools/make/merge_arduino_zips.py | 27 | function | Merges multiple Arduino zipfiles into a single result. |
581 | 579 | parse_args | tensorflow/tensorflow/lite/micro/tools/make/merge_arduino_zips.py | 39 | function | Converts the raw arguments into accessible flags. |
582 | 580 | replace_includes | tensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py | 29 | function | Updates any includes to reference the new Arduino library paths. |
583 | 581 | replace_main | tensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py | 43 | function | Updates any occurrences of a bare main definition to the Arduino equivalent. |
584 | 582 | check_ino_functions | tensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py | 51 | function | Ensures the required functions exist. |
585 | 583 | add_example_ino_library_include | tensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py | 65 | function | Makes sure the example includes the header that loads the library. |
586 | 584 | replace_example_includes | tensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py | 71 | function | Updates any includes for local example files. |
587 | 585 | main | tensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py | 85 | function | Transforms the input source file to work when exported to Arduino. |
588 | 586 | parse_args | tensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py | 108 | function | Converts the raw arguments into accessible flags. |
589 | 587 | replace_arduino_includes | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 36 | function | Updates any includes to reference the new Arduino library paths. |
590 | 588 | replace_arduino_main | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 50 | function | Updates any occurrences of a bare main definition to the Arduino equivalent. |
591 | 589 | check_ino_functions | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 58 | function | Ensures the required functions exist. |
592 | 590 | add_example_ino_library_include | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 72 | function | Makes sure the example includes the header that loads the library. |
593 | 591 | replace_arduino_example_includes | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 78 | function | Updates any includes for local example files. |
594 | 592 | replace_esp_example_includes | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 92 | function | Updates any includes for local example files. |
595 | 593 | transform_arduino_sources | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 109 | function | Transform sources for the Arduino platform.
Args:
input_lines: A sequence of lines from the input file to process.
flags: Flags indicating which transformation(s) to apply.
Returns:
The transformed output as a string. |
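The transform helpers above rewrite source lines (includes, `main` definitions) so exported examples build on Arduino. A minimal runnable sketch of that style of regex-based include rewriting is below; it is illustrative only, and the `Arduino_TensorFlowLite` library prefix is an assumed placeholder, not the path the real script emits.

```python
import re

# Illustrative sketch (not the actual transform_source.py code): rewrite
# bare TensorFlow include paths so they resolve inside an exported
# Arduino library. The "Arduino_TensorFlowLite" prefix is hypothetical.
def rewrite_includes(input_lines):
    include_re = re.compile(r'#include "tensorflow/(.*)"')
    output = []
    for line in input_lines:
        output.append(include_re.sub(
            r'#include "Arduino_TensorFlowLite/tensorflow/\1"', line))
    return "\n".join(output)

src = ['#include "tensorflow/lite/micro/micro_interpreter.h"',
       'int main(int argc, char** argv) { return 0; }']
print(rewrite_includes(src))
```

The real script applies several such passes (includes, `main` renaming, example-local headers) over the same line sequence.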
596 | 594 | transform_esp_sources | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 138 | function | Transform sources for the ESP-IDF platform.
Args:
input_lines: A sequence of lines from the input file to process.
flags: Flags indicating which transformation(s) to apply.
Returns:
The transformed output as a string. |
597 | 595 | main | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 158 | function | Transforms the input source file to work when exported as example. |
598 | 596 | parse_args | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 171 | function | Converts the raw arguments into accessible flags. |
599 | 597 | _requires_input_stats | tensorflow/tensorflow/lite/python/convert.py | 49 | function | |
600 | 598 | _try_convert_to_unicode | tensorflow/tensorflow/lite/python/convert.py | 67 | function | |
601 | 599 | OpsSet | tensorflow/tensorflow/lite/python/convert.py | 80 | class | Enum class defining the sets of ops available to generate TFLite models.
WARNING: Experimental interface, subject to change. |
602 | 600 | ConverterError | tensorflow/tensorflow/lite/python/convert.py | 120 | class | Raised when an error occurs during model conversion. |
603 | 601 | mlir_quantize | tensorflow/tensorflow/lite/python/convert.py | 125 | function | Quantize `input_data_str` with calibration results.
Args:
input_data_str: Input data in serialized form (e.g. a TFLITE model with
calibration results).
disable_per_channel: Bool indicating whether to do per-channel or per-tensor
quantization
fully_quantize: Bool indicating whether to fully quantize the model. Besides
model body, the input/output will be quantized as well.
inference_type: Data type for the activations. The default value is int8.
Returns:
Quantized model in serialized form (e.g. a TFLITE model) with floating-point
inputs and outputs. |
604 | 602 | mlir_sparsify | tensorflow/tensorflow/lite/python/convert.py | 150 | function | Sparsify `input_data_str` to encode sparse tensor with proper format.
Args:
input_data_str: Input data in serialized form (e.g. a TFLITE model).
Returns:
Sparsified model in serialized form (e.g. a TFLITE model). |
605 | 603 | toco_convert_protos | tensorflow/tensorflow/lite/python/convert.py | 162 | function | Convert `input_data_str` according to model and toco parameters.
Unless you know what you are doing, consider using
the more friendly `tf.compat.v1.lite.toco_convert`.
Args:
model_flags_str: Serialized proto describing model properties, see
`toco/model_flags.proto`.
toco_flags_str: Serialized proto describing conversion properties, see
`toco/toco_flags.proto`.
input_data_str: Input data in serialized form (e.g. a graphdef is common)
debug_info_str: Serialized `GraphDebugInfo` proto describing logging
information. (default None)
enable_mlir_converter: Enables MLIR-based conversion instead of the default
TOCO conversion. (default False)
Returns:
Converted model in serialized form (e.g. a TFLITE model is common).
Raises:
ConverterError: When conversion fails in TFLiteConverter, usually due to
ops not being supported.
RuntimeError: When conversion fails, an exception is raised with the error
message embedded. |
606 | 604 | build_toco_convert_protos | tensorflow/tensorflow/lite/python/convert.py | 291 | function | Builds protocol buffers describing a conversion of a model using TOCO.
Typically this is to convert from TensorFlow GraphDef to TFLite, in which
case the default `input_format` and `output_format` are sufficient.
Args:
input_tensors: List of input tensors. Type and shape are computed using
`foo.shape` and `foo.dtype`.
output_tensors: List of output tensors (only .name is used from this).
inference_type: Target data type of real-number arrays in the output file.
Must be `{tf.float32, tf.uint8, tf.int8}`. (default tf.float32)
inference_input_type: Target data type of real-number input arrays. Allows
for a different type for input arrays in the case of quantization. Must be
`{tf.float32, tf.uint8, tf.int8}`. (default `inference_type`)
input_format: Type of data to read. Currently must be
`{TENSORFLOW_GRAPHDEF}`. (default TENSORFLOW_GRAPHDEF)
input_shapes: Input array shape. It needs to be a list of the same length as
`input_tensors`, or None. (default None)
output_format: Output file format. Currently must be `{TFLITE,
GRAPHVIZ_DOT}`. (default TFLITE)
quantized_input_stats: List of tuples of floats representing the mean and
standard deviation. Each tuple maps to the corresponding input tensor.
Only needed if `inference_input_type` is `QUANTIZED_UINT8` or `INT8`.
real_input_value = (quantized_input_value - mean_value) / std_dev_value.
(default None)
default_ranges_stats: Tuple of integers representing (min, max) range values
for all arrays without a specified range. Intended for experimenting with
quantization via "dummy quantization". (default None)
drop_control_dependency: Boolean indicating whether to drop control
dependencies silently. This is due to TFLite not supporting control
dependencies. (default True)
reorder_across_fake_quant: Boolean indicating whether to reorder FakeQuant
nodes in unexpected locations. Used when the location of the FakeQuant
nodes is preventing graph transformations necessary to convert the graph.
Results in a graph that differs from the quantized training graph,
potentially causing differing arithmetic behavior. (default False)
allow_custom_ops: Boolean indicating whether to allow custom operations.
When false any unknown operation is an error. When true, custom ops are
created for any op that is unknown. The developer will need to provide
these to the TensorFlow Lite runtime with a custom resolver. (default
False)
custom_opdefs: List of strings representing custom ops OpDefs that are
included in the GraphDef. Required when using custom operations with the
MLIR-based converter. (default None)
change_concat_input_ranges: Boolean to change behavior of min/max ranges for
inputs and outputs of the concat operator for quantized models. Changes
the ranges of concat operator overlap when true. (default False)
post_training_quantize: Boolean indicating whether to quantize the weights
of the converted float model. Model size will be reduced and there will be
latency improvements (at the cost of accuracy). (default False)
quantize_to_float16: Boolean indicating whether to convert float buffers to
float16. (default False)
dump_graphviz_dir: Full filepath of folder to dump the graphs at various
stages of processing GraphViz .dot files. Preferred over
--output_format=GRAPHVIZ_DOT in order to keep the requirements of the
output file. (default None)
dump_graphviz_video: Boolean indicating whether to dump the graph after
every graph transformation. (default False)
target_ops: Experimental flag, subject to change. Set of OpsSet options
indicating which converter to use. (default set([OpsSet.TFLITE_BUILTINS]))
allow_nonexistent_arrays: Allow specifying array names that don't exist or
are unused in the final graph. (default False)
debug_info: `GraphDebugInfo` proto containing the stack traces for the
original nodes referred by the converted graph.
conversion_summary_dir: A string, the path to the generated conversion logs.
saved_model_dir: Filepath of the saved model to be converted. This value
will be non-empty only when the saved model import path will be used.
Otherwise, the GraphDef-based conversion will be processed.
saved_model_version: SavedModel file format version of the SavedModel file
to be converted. This value will be set only when the SavedModel import
path will be used.
saved_model_tags: Set of string saved model tags, formatted in the
comma-separated value. This value will be set only when the SavedModel
import path will be used.
saved_model_exported_names: Names to be exported (default: export all) when
the saved model import path is on. This value will be set only when the
SavedModel import path will be used.
Returns:
model_flags, toco_flags, debug_info: three protocol buffers describing the
conversion process and debug information.
Raises:
ValueError:
If the input tensor type is unknown
Missing mean_values or std_dev_values
RuntimeError: If TOCO fails to convert (in which case the runtime error's
error text will contain the TOCO error log) |
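The `quantized_input_stats` entry above defines the mapping real_input_value = (quantized_input_value - mean_value) / std_dev_value. A small runnable check of that formula (pure Python, independent of the converter) makes the common (mean, std) choices concrete:

```python
def dequantize(q, mean_value, std_dev_value):
    # real_input_value = (quantized_input_value - mean_value) / std_dev_value
    return (q - mean_value) / std_dev_value

# (0., 255.) maps uint8 [0, 255] onto [0, 1];
# (127.5, 127.5) maps it onto [-1, 1].
assert dequantize(255, 0.0, 255.0) == 1.0
assert dequantize(0, 127.5, 127.5) == -1.0
assert dequantize(255, 127.5, 127.5) == 1.0
```

Picking (mean, std) is therefore just choosing which real-valued interval the integer range should represent.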
607 | 605 | toco_convert_graph_def | tensorflow/tensorflow/lite/python/convert.py | 485 | function | Convert a model using TOCO.
This function is used to convert GraphDefs that cannot be loaded into
TensorFlow to TFLite. Conversion can be customized by providing arguments
that are forwarded to `build_toco_convert_protos` (see documentation for
details).
Args:
input_data: Input data (i.e. often `sess.graph_def`),
input_arrays_with_shape: Tuple of strings representing input tensor names
and list of integers representing input shapes
(e.g., [("foo", [1, 16, 16, 3])]). Use only when graph cannot be loaded
into TensorFlow and when `input_tensors` is None. (default None)
output_arrays: List of output tensors to freeze graph with. Use only when
graph cannot be loaded into TensorFlow and when `output_tensors` is None.
(default None)
enable_mlir_converter: Enables MLIR-based conversion instead of TOCO
conversion.
*args: See `build_toco_convert_protos`,
**kwargs: See `build_toco_convert_protos`.
Returns:
The converted data. For example if TFLite was the destination, then
this will be a tflite flatbuffer in a bytes array.
Raises:
Defined in `build_toco_convert_protos`. |
608 | 606 | toco_convert_impl | tensorflow/tensorflow/lite/python/convert.py | 541 | function | Convert a model using TOCO.
Typically this function is used to convert from TensorFlow GraphDef to TFLite.
Conversion can be customized by providing arguments that are forwarded to
`build_toco_convert_protos` (see documentation for details).
Args:
input_data: Input data (i.e. often `sess.graph_def`),
input_tensors: List of input tensors. Type and shape are computed using
`foo.shape` and `foo.dtype`.
output_tensors: List of output tensors (only .name is used from this).
enable_mlir_converter: Enables MLIR-based conversion instead of TOCO
conversion.
*args: See `build_toco_convert_protos`,
**kwargs: See `build_toco_convert_protos`.
Returns:
The converted data. For example if TFLite was the destination, then
this will be a tflite flatbuffer in a bytes array.
Raises:
Defined in `build_toco_convert_protos`. |
609 | 607 | toco_convert | tensorflow/tensorflow/lite/python/convert.py | 580 | function | Convert a model using TOCO.
Typically this function is used to convert from TensorFlow GraphDef to TFLite.
Conversion can be customized by providing arguments that are forwarded to
`build_toco_convert_protos` (see documentation for details). This function has
been deprecated. Please use `lite.TFLiteConverter` instead.
Args:
input_data: Input data (i.e. often `sess.graph_def`),
input_tensors: List of input tensors. Type and shape are computed using
`foo.shape` and `foo.dtype`.
output_tensors: List of output tensors (only .name is used from this).
*args: See `build_toco_convert_protos`,
**kwargs: See `build_toco_convert_protos`.
Returns:
The converted data. For example if TFLite was the destination, then
this will be a tflite flatbuffer in a bytes array.
Raises:
Defined in `build_toco_convert_protos`. |
610 | 608 | run_main | tensorflow/tensorflow/lite/python/convert_file_to_c_source.py | 29 | function | Main in convert_file_to_c_source.py. |
611 | 609 | main | tensorflow/tensorflow/lite/python/convert_file_to_c_source.py | 101 | function | |
612 | 610 | _log_tensor_details | tensorflow/tensorflow/lite/python/convert_saved_model.py | 30 | function | Log tensor details: name, shape, and type. |
613 | 611 | get_meta_graph_def | tensorflow/tensorflow/lite/python/convert_saved_model.py | 46 | function | Validate saved_model and extract MetaGraphDef.
Args:
saved_model_dir: saved_model path to convert.
tag_set: Set of tag(s) of the MetaGraphDef to load.
Returns:
The meta_graph_def used for tflite conversion.
Raises:
ValueError: No valid MetaGraphDef for given tag_set. |
614 | 612 | get_signature_def | tensorflow/tensorflow/lite/python/convert_saved_model.py | 63 | function | Get the signature def from meta_graph with given signature_key.
Args:
meta_graph: meta_graph_def.
signature_key: signature_def in the meta_graph_def.
Returns:
The signature_def used for tflite conversion.
Raises:
ValueError: Given signature_key is not valid for this meta_graph. |
615 | 613 | get_inputs_outputs | tensorflow/tensorflow/lite/python/convert_saved_model.py | 88 | function | Get inputs and outputs from SignatureDef.
Args:
signature_def: SignatureDef in the meta_graph_def for conversion.
Returns:
The inputs and outputs in the graph for conversion. |
616 | 614 | _get_tensors | tensorflow/tensorflow/lite/python/convert_saved_model.py | 112 | function | Gets the tensors associated with the tensor names.
Either signature_def_tensor_names or user_tensor_names should be provided. If
the user provides tensors, the tensors associated with the user provided
tensor names are provided. Otherwise, the tensors associated with the names in
the SignatureDef are provided.
Args:
graph: GraphDef representing graph.
signature_def_tensor_names: Tensor names stored in either the inputs or
outputs of a SignatureDef. (default None)
user_tensor_names: Tensor names provided by the user. (default None)
Returns:
List of tensors.
Raises:
ValueError:
signature_def_tensors and user_tensor_names are undefined or empty.
user_tensor_names are not valid. |
617 | 615 | freeze_saved_model | tensorflow/tensorflow/lite/python/convert_saved_model.py | 155 | function | Converts a SavedModel to a frozen graph.
Args:
saved_model_dir: SavedModel directory to convert.
input_arrays: List of input tensors to freeze graph with. Uses input arrays
from SignatureDef when none are provided.
input_shapes: Dict of strings representing input tensor names to list of
integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}).
Automatically determined when input shapes is None (e.g., {"foo" : None}).
output_arrays: List of output tensors to freeze graph with. Uses output
arrays from SignatureDef when none are provided.
tag_set: Set of tags identifying the MetaGraphDef within the SavedModel to
analyze. All tags in the tag set must be present.
signature_key: Key identifying SignatureDef containing inputs and outputs.
Returns:
frozen_graph_def: Frozen GraphDef.
in_tensors: List of input tensors for the graph.
out_tensors: List of output tensors for the graph.
graph: `Graph` object.
Raises:
ValueError:
SavedModel doesn't contain a MetaGraphDef identified by tag_set.
signature_key is not in the MetaGraphDef.
assets/ directory is in the MetaGraphDef.
input_shapes does not match the length of input_arrays.
input_arrays or output_arrays are not valid. |
618 | 616 | FreezeSavedModelTest | tensorflow/tensorflow/lite/python/convert_saved_model_test.py | 40 | class | |
619 | 617 | ConvertTest | tensorflow/tensorflow/lite/python/convert_test.py | 38 | class | |
620 | 618 | ConvertTestOpHint | tensorflow/tensorflow/lite/python/convert_test.py | 168 | class | Test the hint to stub functionality. |
621 | 619 | _tf_export | tensorflow/tensorflow/lite/python/interpreter.py | 37 | function | |
622 | 620 | Delegate | tensorflow/tensorflow/lite/python/interpreter.py | 42 | class | Python wrapper class to manage TfLiteDelegate objects.
The shared library is expected to have two functions:
TfLiteDelegate* tflite_plugin_create_delegate(
char**, char**, size_t, void (*report_error)(const char *))
void tflite_plugin_destroy_delegate(TfLiteDelegate*)
The first one creates a delegate object. It may return NULL to indicate an
error (with a suitable error message reported by calling report_error()).
The second one destroys delegate object and must be called for every
created delegate object. Passing NULL as argument value is allowed, i.e.
tflite_plugin_destroy_delegate(tflite_plugin_create_delegate(...))
always works. |
623 | 621 | load_delegate | tensorflow/tensorflow/lite/python/interpreter.py | 132 | function | Returns loaded Delegate object.
Args:
library: Name of shared library containing the
[TfLiteDelegate](https://www.tensorflow.org/lite/performance/delegates).
options: Dictionary of options that are required to load the delegate. All
keys and values in the dictionary should be convertible to str. Consult
the documentation of the specific delegate for required and legal options.
(default None)
Returns:
Delegate object.
Raises:
ValueError: Delegate failed to load.
RuntimeError: If delegate loading is used on unsupported platform. |
624 | 622 | Interpreter | tensorflow/tensorflow/lite/python/interpreter.py | 159 | class | Interpreter interface for TensorFlow Lite Models.
This makes the TensorFlow Lite interpreter accessible in Python.
It is possible to use this interpreter in a multithreaded Python environment,
but you must be sure to call functions of a particular instance from only
one thread at a time. So if you want to have 4 threads running different
inferences simultaneously, create an interpreter for each one as thread-local
data. Similarly, if you are calling invoke() in one thread on a single
interpreter but you want to use tensor() on another thread once it is done,
you must use a synchronization primitive between the threads to ensure invoke
has returned before calling tensor(). |
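The `Interpreter` docstring's threading rule (one instance per thread, synchronize across threads) can be sketched with a thread-local cache. `FakeInterpreter` below is a stand-in for `tf.lite.Interpreter` so the pattern itself stays runnable without TensorFlow; only the pattern, not the class, comes from the source.

```python
import threading

class FakeInterpreter:
    """Stand-in for tf.lite.Interpreter; invoke() doubles its input."""
    def invoke(self, x):
        return x * 2

_tls = threading.local()

def get_interpreter():
    # Each thread lazily builds and caches its own interpreter, so no two
    # threads ever call into the same instance concurrently.
    if not hasattr(_tls, "interp"):
        _tls.interp = FakeInterpreter()
    return _tls.interp

results = []
results_lock = threading.Lock()

def worker(x):
    y = get_interpreter().invoke(x)
    with results_lock:
        results.append(y)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))
```

With real interpreters, each thread would construct `tf.lite.Interpreter(model_path=...)` in `get_interpreter()` instead.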
625 | 623 | InterpreterWithCustomOps | tensorflow/tensorflow/lite/python/interpreter.py | 552 | class | Interpreter interface for TensorFlow Lite Models that accepts custom ops.
The interface provided by this class is experimental and therefore not exposed
as part of the public API.
Wraps the tf.lite.Interpreter class and adds the ability to load custom ops
by providing the names of functions that take a pointer to a BuiltinOpResolver
and add a custom op. |
626 | 624 | InterpreterCustomOpsTest | tensorflow/tensorflow/lite/python/interpreter_test.py | 43 | class | |
627 | 625 | InterpreterTest | tensorflow/tensorflow/lite/python/interpreter_test.py | 63 | class | |
628 | 626 | InterpreterTestErrorPropagation | tensorflow/tensorflow/lite/python/interpreter_test.py | 260 | class | |
629 | 627 | InterpreterTensorAccessorTest | tensorflow/tensorflow/lite/python/interpreter_test.py | 298 | class | |
630 | 628 | InterpreterDelegateTest | tensorflow/tensorflow/lite/python/interpreter_test.py | 353 | class | |
631 | 629 | Optimize | tensorflow/tensorflow/lite/python/lite.py | 88 | class | Enum defining the optimizations to apply when generating tflite graphs.
Some optimizations may come at the cost of accuracy.
DEFAULT
Default optimization strategy.
Converter will do its best to improve size and latency based on the
information provided.
Enhanced optimizations are gained by providing a representative_dataset.
This is recommended, and is currently equivalent to the modes below.
Currently, weights will be quantized and if representative_dataset is
provided, activations for quantizable operations will also be quantized.
OPTIMIZE_FOR_SIZE
Deprecated. Does the same as DEFAULT.
OPTIMIZE_FOR_LATENCY
Deprecated. Does the same as DEFAULT. |
632 | 630 | RepresentativeDataset | tensorflow/tensorflow/lite/python/lite.py | 131 | class | Representative dataset to evaluate optimizations.
A representative dataset that can be used to evaluate optimizations by the
converter. E.g. converter can use these examples to estimate (min, max) ranges
by calibrating the model on inputs. This can allow converter to quantize a
converted floating point model. |
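`RepresentativeDataset` wraps a generator that yields calibration samples, one list of array-likes per model input. A minimal sketch of that generator shape follows; the sample values are placeholders (real code would yield numpy arrays shaped like the model's inputs), so this only illustrates the protocol.

```python
# Sketch of the generator a RepresentativeDataset wraps: each yielded
# item is a list with one array-like per model input. Plain Python lists
# stand in for the numpy arrays a real calibration set would use.
def representative_dataset():
    for i in range(100):
        yield [[float(i) / 100.0]]

samples = list(representative_dataset())
print(len(samples))  # 100 calibration samples
```

The converter iterates this generator during calibration to estimate (min, max) ranges for quantization.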
633 | 631 | TargetSpec | tensorflow/tensorflow/lite/python/lite.py | 153 | class | Specification of target device.
Details about target device. Converter optimizes the generated model for
specific device.
Attributes:
supported_ops: Experimental flag, subject to change. Set of OpsSet options
supported by the device. (default set([OpsSet.TFLITE_BUILTINS]))
supported_types: List of types for constant values on the target device.
Supported values are types exported by lite.constants. Frequently, an
optimization choice is driven by the most compact (i.e. smallest) type in
this list (default [constants.FLOAT]) |
634 | 632 | QuantizationMode | tensorflow/tensorflow/lite/python/lite.py | 177 | class | QuantizationMode determines the quantized conversion from user options. |
635 | 633 | TFLiteConverterBase | tensorflow/tensorflow/lite/python/lite.py | 384 | class | Converter subclass to share functionality between V1 and V2 converters. |
636 | 634 | TFLiteConverterBaseV2 | tensorflow/tensorflow/lite/python/lite.py | 522 | class | Converter subclass to share functionality between V2 converters.
Attributes:
allow_custom_ops: Boolean indicating whether to allow custom operations.
When False, any unknown operation is an error. When True, custom ops are
created for any op that is unknown. The developer needs to provide these
to the TensorFlow Lite runtime with a custom resolver. (default False)
optimizations: Experimental flag, subject to change. A list of optimizations
to apply when converting the model. E.g. `[Optimize.DEFAULT]`
representative_dataset: A representative dataset that can be used to
generate input and output samples for the model. The converter can use the
dataset to evaluate different optimizations. Note that this is an optional
attribute but it is necessary if INT8 is the only supported builtin op in
target ops.
target_spec: Experimental flag, subject to change. Specification of target
device.
inference_input_type: Data type of the input layer. Note that integer types
(tf.int8 and tf.uint8) are currently only supported for post training
integer quantization. (default tf.float32, must be in {tf.float32,
tf.int8, tf.uint8})
inference_output_type: Data type of the output layer. Note that integer
types (tf.int8 and tf.uint8) are currently only supported for post
training integer quantization. (default tf.float32, must be in
{tf.float32, tf.int8, tf.uint8})
experimental_new_converter: Experimental flag, subject to change. Enables
MLIR-based conversion instead of TOCO conversion. (default True) |
637 | 635 | TFLiteSavedModelConverterV2 | tensorflow/tensorflow/lite/python/lite.py | 652 | class | Converts the given SavedModel into TensorFlow Lite model.
Attributes:
saved_model_dir: Directory of the SavedModel. |
638 | 636 | TFLiteKerasModelConverterV2 | tensorflow/tensorflow/lite/python/lite.py | 719 | class | Converts the given Keras model into TensorFlow Lite model. |
639 | 637 | TFLiteFrozenGraphConverterV2 | tensorflow/tensorflow/lite/python/lite.py | 840 | class | Converts the given frozen graph into TensorFlow Lite model. |
640 | 638 | TFLiteConverterV2 | tensorflow/tensorflow/lite/python/lite.py | 910 | class | Converts a TensorFlow model into TensorFlow Lite model.
Attributes:
allow_custom_ops: Boolean indicating whether to allow custom operations.
When False, any unknown operation is an error. When True, custom ops are
created for any op that is unknown. The developer needs to provide these
to the TensorFlow Lite runtime with a custom resolver. (default False)
optimizations: Experimental flag, subject to change. A list of optimizations
to apply when converting the model. E.g. `[Optimize.DEFAULT]`
representative_dataset: A representative dataset that can be used to
generate input and output samples for the model. The converter can use the
dataset to evaluate different optimizations. Note that this is an optional
attribute but it is necessary if INT8 is the only supported builtin op in
target ops.
target_spec: Experimental flag, subject to change. Specification of target
device.
inference_input_type: Data type of the input layer. Note that integer types
(tf.int8 and tf.uint8) are currently only supported for post training
integer quantization. (default tf.float32, must be in {tf.float32,
tf.int8, tf.uint8})
inference_output_type: Data type of the output layer. Note that integer
types (tf.int8 and tf.uint8) are currently only supported for post
training integer quantization. (default tf.float32, must be in
{tf.float32, tf.int8, tf.uint8})
experimental_new_converter: Experimental flag, subject to change. Enables
MLIR-based conversion instead of TOCO conversion. (default True)
Example usage:
```python
# Converting a SavedModel to a TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
# Converting a tf.Keras model to a TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Converting ConcreteFunctions to a TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_concrete_functions([func])
tflite_model = converter.convert()
``` |
641 | 639 | TFLiteConverterBaseV1 | tensorflow/tensorflow/lite/python/lite.py | 1085 | class | Converter subclass to share functionality between V1 converters.
Attributes:
inference_type: Target data type of real-number arrays in the output file.
Must be `{tf.float32, tf.uint8}`. If `optimizations` are provided, this
parameter is ignored. (default tf.float32)
inference_input_type: Target data type of real-number input arrays. Allows
for a different type for input arrays. If an integer type is provided and
`optimizations` are not used, `quantized_inputs_stats` must be provided.
If `inference_type` is tf.uint8, signaling conversion to a fully quantized
model from a quantization-aware trained input model, then
`inference_input_type` defaults to tf.uint8. In all other cases,
`inference_input_type` defaults to tf.float32. Must be `{tf.float32,
tf.uint8, tf.int8}`
inference_output_type: Target data type of real-number output arrays. Allows
for a different type for output arrays. If `inference_type` is tf.uint8,
signaling conversion to a fully quantized model from a quantization-aware
trained output model, then `inference_output_type` defaults to tf.uint8.
In all other cases, `inference_output_type` must be tf.float32; an error
will be thrown otherwise. Must be `{tf.float32, tf.uint8, tf.int8}`
output_format: Output file format. Currently must be `{TFLITE,
GRAPHVIZ_DOT}`. (default TFLITE)
quantized_input_stats: Dict of strings representing input tensor names
mapped to tuple of floats representing the mean and standard deviation
of the training data (e.g., {"foo" : (0., 1.)}). Only needed if
`inference_input_type` is `QUANTIZED_UINT8`. real_input_value =
(quantized_input_value - mean_value) / std_dev_value. (default {})
default_ranges_stats: Tuple of integers representing (min, max) range values
for all arrays without a specified range. Intended for experimenting with
quantization via "dummy quantization". (default None)
drop_control_dependency: Boolean indicating whether to drop control
dependencies silently. This is due to TFLite not supporting control
dependencies. (default True)
reorder_across_fake_quant: Boolean indicating whether to reorder FakeQuant
nodes in unexpected locations. Used when the location of the FakeQuant
nodes is preventing graph transformations necessary to convert the graph.
Results in a graph that differs from the quantized training graph,
potentially causing differing arithmetic behavior. (default False)
change_concat_input_ranges: Boolean to change behavior of min/max ranges for
inputs and outputs of the concat operator for quantized models. Changes
the ranges of concat operator overlap when true. (default False)
allow_custom_ops: Boolean indicating whether to allow custom operations.
When false any unknown operation is an error. When true, custom ops are
created for any op that is unknown. The developer will need to provide
these to the TensorFlow Lite runtime with a custom resolver. (default
False)
post_training_quantize: Deprecated. Please specify `[Optimize.DEFAULT]` for
`optimizations` instead. Boolean indicating whether to quantize the
weights of the converted float model. Model size will be reduced and
there will be latency improvements (at the cost of accuracy). (default
False)
dump_graphviz_dir: Full filepath of folder to dump the graphs at various
stages of processing GraphViz .dot files. Preferred over
--output_format=GRAPHVIZ_DOT in order to keep the requirements of the
output file. (default None)
dump_graphviz_video: Boolean indicating whether to dump the graph after
every graph transformation. (default False)
conversion_summary_dir: A string indicating the path to the generated
conversion logs.
target_ops: Deprecated. Please specify `target_spec.supported_ops` instead.
Set of OpsSet options indicating which converter to use. (default
set([OpsSet.TFLITE_BUILTINS]))
target_spec: Experimental flag, subject to change. Specification of target
device.
optimizations: Experimental flag, subject to change. A list of optimizations
to apply when converting the model. E.g. `[Optimize.DEFAULT]`
representative_dataset: A representative dataset that can be used to
generate input and output samples for the model. The converter can use the
dataset to evaluate different optimizations.
experimental_new_converter: Experimental flag, subject to change. Enables
MLIR-based conversion instead of TOCO conversion. (default True) |
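`default_ranges_stats` supplies a (min, max) range for arrays with no recorded range, and such a range defines an affine mapping onto the integer grid. The sketch below illustrates that mapping for uint8; it is an illustration of the arithmetic, not the converter's actual quantization code.

```python
def dummy_quantize(x, range_min, range_max, num_bits=8):
    # Affine mapping of [range_min, range_max] onto [0, 2**num_bits - 1],
    # the kind of range default_ranges_stats supplies when an array has
    # no recorded (min, max). Illustrative only, not converter code.
    levels = 2 ** num_bits - 1
    scale = (range_max - range_min) / levels
    q = round((x - range_min) / scale)
    return max(0, min(levels, q))

assert dummy_quantize(-6.0, -6.0, 6.0) == 0
assert dummy_quantize(6.0, -6.0, 6.0) == 255
```

A too-wide dummy range wastes integer resolution, which is why the docstrings flag this option as experimentation only.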
642 | 640 | TFLiteSavedModelConverter | tensorflow/tensorflow/lite/python/lite.py | 1410 | class | Converts the given SavedModel into TensorFlow Lite model.
Attributes:
saved_model_dir: Directory of the SavedModel. |
643 | 641 | TFLiteKerasModelConverter | tensorflow/tensorflow/lite/python/lite.py | 1458 | class | Converts the given Keras model into TensorFlow Lite model. |
644 | 642 | TFLiteFrozenGraphConverter | tensorflow/tensorflow/lite/python/lite.py | 1586 | class | Converts the given frozen graph def into TensorFlow Lite model. |
645 | 643 | TFLiteConverter | tensorflow/tensorflow/lite/python/lite.py | 1634 | class | Convert a TensorFlow model into `output_format`.
This is used to convert from a TensorFlow GraphDef, SavedModel or tf.keras
model into either a TFLite FlatBuffer or graph visualization.
Attributes:
inference_type: Target data type of real-number arrays in the output file.
Must be `{tf.float32, tf.uint8}`. If `optimizations` are provided, this
parameter is ignored. (default tf.float32)
inference_input_type: Target data type of real-number input arrays. Allows
for a different type for input arrays.
If an integer type is provided and `optimizations` are not used,
`quantized_inputs_stats` must be provided.
If `inference_type` is tf.uint8, signaling conversion to a fully quantized
model from a quantization-aware trained input model, then
`inference_input_type` defaults to tf.uint8.
In all other cases, `inference_input_type` defaults to tf.float32.
Must be `{tf.float32, tf.uint8, tf.int8}`
inference_output_type: Target data type of real-number output arrays. Allows
for a different type for output arrays.
If `inference_type` is tf.uint8, signaling conversion to a fully quantized
model from a quantization-aware trained output model, then
`inference_output_type` defaults to tf.uint8.
In all other cases, `inference_output_type` must be tf.float32; an error
will be thrown otherwise.
Must be `{tf.float32, tf.uint8, tf.int8}`
output_format: Output file format. Currently must be `{TFLITE,
GRAPHVIZ_DOT}`. (default TFLITE)
quantized_input_stats: Dict of strings representing input tensor names
mapped to tuple of floats representing the mean and standard deviation
of the training data (e.g., {"foo" : (0., 1.)}). Only needed if
`inference_input_type` is `QUANTIZED_UINT8`.
real_input_value = (quantized_input_value - mean_value) / std_dev_value.
(default {})
default_ranges_stats: Tuple of integers representing (min, max) range values
for all arrays without a specified range. Intended for experimenting with
quantization via "dummy quantization". (default None)
drop_control_dependency: Boolean indicating whether to drop control
dependencies silently. This is due to TFLite not supporting control
dependencies. (default True)
reorder_across_fake_quant: Boolean indicating whether to reorder FakeQuant
nodes in unexpected locations. Used when the location of the FakeQuant
nodes is preventing graph transformations necessary to convert the graph.
Results in a graph that differs from the quantized training graph,
potentially causing differing arithmetic behavior. (default False)
change_concat_input_ranges: Boolean to change behavior of min/max ranges for
inputs and outputs of the concat operator for quantized models. Changes
the ranges of concat operator overlap when true. (default False)
allow_custom_ops: Boolean indicating whether to allow custom operations.
When false any unknown operation is an error. When true, custom ops are
created for any op that is unknown. The developer will need to provide
these to the TensorFlow Lite runtime with a custom resolver.
(default False)
post_training_quantize: Deprecated. Please specify `[Optimize.DEFAULT]` for
`optimizations` instead. Boolean indicating whether to quantize the
weights of the converted float model. Model size will be reduced and
there will be latency improvements (at the cost of accuracy).
(default False)
dump_graphviz_dir: Full filepath of folder to dump the graphs at various
stages of processing GraphViz .dot files. Preferred over
--output_format=GRAPHVIZ_DOT in order to keep the requirements of the
output file. (default None)
dump_graphviz_video: Boolean indicating whether to dump the graph after
every graph transformation. (default False)
conversion_summary_dir: A string indicating the path to the generated
conversion logs.
target_ops: Deprecated. Please specify `target_spec.supported_ops` instead.
Set of OpsSet options indicating which converter to use.
(default set([OpsSet.TFLITE_BUILTINS]))
target_spec: Experimental flag, subject to change. Specification of target
device.
optimizations: Experimental flag, subject to change. A list of optimizations
to apply when converting the model. E.g. `[Optimize.DEFAULT]`
representative_dataset: A representative dataset that can be used to
generate input and output samples for the model. The converter can use
the dataset to evaluate different optimizations.
experimental_new_converter: Experimental flag, subject to change.
Enables MLIR-based conversion instead of TOCO conversion. (default True)
Example usage:
```python
# Converting a GraphDef from session.
converter = tf.compat.v1.TFLiteConverter.from_session(
sess, in_tensors, out_tensors)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
# Converting a GraphDef from file.
converter = tf.compat.v1.TFLiteConverter.from_frozen_graph(
graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
# Converting a SavedModel.
converter = tf.compat.v1.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
# Converting a tf.keras model.
converter = tf.compat.v1.TFLiteConverter.from_keras_model_file(keras_model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
``` |
646 | 644 | TocoConverter | tensorflow/tensorflow/lite/python/lite.py | 1979 | class | Convert a TensorFlow model into `output_format` using TOCO.
This class has been deprecated. Please use `lite.TFLiteConverter` instead. |
647 | 645 | FromSessionTest | tensorflow/tensorflow/lite/python/lite_flex_test.py | 38 | class | |
648 | 646 | FromConcreteFunctionTest | tensorflow/tensorflow/lite/python/lite_flex_test.py | 103 | class | |
649 | 647 | LiteTest | tensorflow/tensorflow/lite/python/lite_test.py | 59 | class | Base class of all the tests in this module. |
650 | 648 | TestModels | tensorflow/tensorflow/lite/python/lite_test.py | 63 | class | |
651 | 649 | FromConstructor | tensorflow/tensorflow/lite/python/lite_test.py | 76 | class | |
652 | 650 | FromSessionTest | tensorflow/tensorflow/lite/python/lite_test.py | 115 | class | |
653 | 651 | FromFrozenGraphFile | tensorflow/tensorflow/lite/python/lite_test.py | 1464 | class | |
654 | 652 | FromFrozenGraphObjectDetection | tensorflow/tensorflow/lite/python/lite_test.py | 1650 | class | |
655 | 653 | FromSavedModelTest | tensorflow/tensorflow/lite/python/lite_test.py | 1713 | class | |
656 | 654 | MyAddLayer | tensorflow/tensorflow/lite/python/lite_test.py | 1897 | class | |
657 | 655 | FromKerasFile | tensorflow/tensorflow/lite/python/lite_test.py | 1912 | class | |
658 | 656 | GrapplerTest | tensorflow/tensorflow/lite/python/lite_test.py | 2292 | class | |
659 | 657 | ImportOpsUtilTest | tensorflow/tensorflow/lite/python/lite_test.py | 2384 | class | |
660 | 658 | DefaultConverterAttrsTest | tensorflow/tensorflow/lite/python/lite_test.py | 2390 | class | |
661 | 659 | FromConcreteFunctionTest | tensorflow/tensorflow/lite/python/lite_v2_test.py | 48 | class | |
662 | 660 | FromSavedModelTest | tensorflow/tensorflow/lite/python/lite_v2_test.py | 498 | class | |
663 | 661 | FromKerasModelTest | tensorflow/tensorflow/lite/python/lite_v2_test.py | 709 | class | |
664 | 662 | ControlFlowTest | tensorflow/tensorflow/lite/python/lite_v2_test.py | 825 | class | |
665 | 663 | GrapplerTest | tensorflow/tensorflow/lite/python/lite_v2_test.py | 1013 | class | |
666 | 664 | UnknownShapes | tensorflow/tensorflow/lite/python/lite_v2_test.py | 1047 | class | |
667 | 665 | ModelTest | tensorflow/tensorflow/lite/python/lite_v2_test_util.py | 34 | class | Base test class for TensorFlow Lite 2.x model tests. |
668 | 666 | OpHint | tensorflow/tensorflow/lite/python/op_hint.py | 97 | class | A class that helps build tflite function invocations.
It allows you to take a bunch of TensorFlow ops and annotate the construction
such that toco knows how to convert it to tflite. This embeds a pseudo
function in a TensorFlow graph. This allows embedding high-level API usage
information in a lower level TensorFlow implementation so that an alternative
implementation can be substituted later.
Essentially, any "input" into this pseudo op is fed into an identity, and
attributes are added to that input before being used by the constituent ops
that make up the pseudo op. A similar process is done to any output that
is to be exported from the current op. |
669 | 667 | _LiteOperand | tensorflow/tensorflow/lite/python/op_hint.py | 471 | class | Abstract operand for a tflite hint function.
This is a base class that handles representing arguments to an OpHint.
It also is able to serialize operands to the stubbed graph_def.
Child classes are responsible for being able to
store information about the hint identity operators. They are also responsible
for knowing how to serialize to output graphdefs.
Typically this will be implemented by holding one or more identity nodes
that were previously discovered as hints. |
670 | 668 | _LiteSingleOperand | tensorflow/tensorflow/lite/python/op_hint.py | 518 | class | A simple operand that is non-aggregated (i.e. most hints). |
671 | 669 | _LiteAggregateOperand | tensorflow/tensorflow/lite/python/op_hint.py | 544 | class | An operand for a tflite hint function that is aggregated from many.
For example, an LSTM is a grid of operators that are all related. Inputs
going into them may need to be fused, so they should all be tracked as
related arguments. |
672 | 670 | _LiteFuncCall | tensorflow/tensorflow/lite/python/op_hint.py | 670 | class | Represent a TensorFlow Lite custom function.
This is used to accumulate found hints in the graphdef into a single
conceptual unit.
Attributes:
inputs: inputs to the op (hash from index # to argument)
outputs: outputs to the op (hash from index # to argument)
function_name: the tflite custom op name to use
uuid: a unique call id for this particular call (i.e. multiple function
calls would have the same function_name but different uuids).
params: A map from param name to value for op constant data, i.e. axis on a
reduction, strides on a convolution, etc.
level: Level of the OpHint.
children_inputs_mappings: If the Ophint has children, children inputs
mappings indicate how their inputs & outputs are mapped. |
673 | 671 | _find_all_hints_in_nodes | tensorflow/tensorflow/lite/python/op_hint.py | 730 | function | Look at all the input nodes and return a list of LiteFuncCall objs.
Args:
nodes: A TensorFlow graph_def to look for LiteFuncCalls.
Returns:
a list of `LiteFuncCall` objects in the form |
674 | 672 | _extract_topology_sequence_mapping | tensorflow/tensorflow/lite/python/op_hint.py | 795 | function | |
675 | 673 | _find_children_hints_in_while_loop | tensorflow/tensorflow/lite/python/op_hint.py | 800 | function | Find children hints and all nodes inside the while loop.
Args:
function_def: Function def of the while loop.
nodes_mapping: While loop input_arg : real node name.
Returns:
Ordered children hints and all re-mapped nodes inside the while loop. |
676 | 674 | _find_children_hints | tensorflow/tensorflow/lite/python/op_hint.py | 833 | function | Find all children hints.
For a given OpHint, we find all children hints inside it, we also copy all the
nodes inside function defs (if applicable) to the original graph_def, they are
returned in a list as well.
Args:
call: Parent OpHint that contains children ophints.
graph_def: Original graph def.
Returns:
Ordered children hints inside the parent ophint; new graph def that contains
nodes inside function defs (if applicable); nodes inside function defs. |
677 | 675 | _tensor_name_base | tensorflow/tensorflow/lite/python/op_hint.py | 887 | function | Removes the device assignment code from a tensor.
e.g. _tensor_name_base("foo:3") => "foo"
Args:
full_tensor_name: A tensor name that is annotated with a device placement
(this is what tensor flow introspection gives).
Returns:
A name without any device assignment. |
678 | 676 | _tensorflow_output_name | tensorflow/tensorflow/lite/python/op_hint.py | 904 | function | |
679 | 677 | _check_subgraph_closed | tensorflow/tensorflow/lite/python/op_hint.py | 910 | function | Checks to make sure node only connects to predecessor graph through inputs.
Args:
n: Node to check
reachable_by_input: Nodes that are reachable by all inputs of subgraph
input_nodes_set: The set of nodes that are "inputs".
name_to_input_name: Maps from name to the list of inputs.
Raises:
TypeError: If the given node uses items past inputs directly. |
680 | 678 | _convert_single_op_hint_to_stub | tensorflow/tensorflow/lite/python/op_hint.py | 940 | function | Given a graph_def, converts `call` into a stub and returns a new graph_def.
Args:
call: A single function call to be converted.
graph_def: A graph_def to use as input (that has call obviously).
function_def_nodes: Nodes inside the function def that are not connected to
the graph.
is_last_run: Whether it is the last run for a given pass (for OpHint has
children).
Returns:
A new transformed graph-def that has call as a stub (single op).
Note: after this process, the graph_def can no longer be loaded into
the tensorflow runtime, so all future manipulations are done at the
graph_def level. |
681 | 679 | _remove_one_redundant_stack_unstack | tensorflow/tensorflow/lite/python/op_hint.py | 1070 | function | Removes a stack->unstack pattern from in_graph_def in a returned graph.
Args:
in_graph_def: Graph def to use as input.
Returns:
Simplified tuple (graph_def, changed_something) where changed_something
is true if anything was done. |
682 | 680 | _remove_redundant_stack_unstack | tensorflow/tensorflow/lite/python/op_hint.py | 1161 | function | |
683 | 681 | _get_correct_mapping | tensorflow/tensorflow/lite/python/op_hint.py | 1170 | function | |
684 | 682 | _convert_op_hints_to_stubs_helper | tensorflow/tensorflow/lite/python/op_hint.py | 1180 | function | Converts a graph_def to a new graph_def where all op hints are stubbed.
Args:
graph_def: A graph def that we should convert.
write_callback: A function pointer that can be used to write intermediate
steps of graph transformation (optional).
Returns:
A new stubbed graph_def. |
685 | 683 | find_all_hinted_output_nodes | tensorflow/tensorflow/lite/python/op_hint.py | 1257 | function | Find all Ophints output nodes in the graph.
This is used to get all the output nodes that are ophinted; it is important
for operations like convert_variables_to_constants to keep the OpHint
structure.
Note: only one of session or graph_def should be used, not both.
Why can this be useful? Some TensorFlow ops (e.g. bidirectional rnn) can
generate multiple outputs for an unfused subgraph. If not all output nodes are
consumed, graph optimization can potentially drop the unused nodes and leave
the ophints in an invalid state (due to missing ophinted output nodes). So it's
important to find all those hinted output nodes and make sure they're
not discarded.
Args:
session: A TensorFlow session that contains the graph to convert.
graph_def: A graph def that we should convert.
Returns:
A list of OpHints output nodes.
Raises:
ValueError: If both session and graph_def are provided. |
686 | 684 | is_ophint_converted | tensorflow/tensorflow/lite/python/op_hint.py | 1292 | function | |
687 | 685 | convert_op_hints_to_stubs | tensorflow/tensorflow/lite/python/op_hint.py | 1305 | function | Converts a graphdef with LiteOp hints into stub operations.
This is used to prepare for toco conversion of complex intrinsic usages.
Note: only one of session or graph_def should be used, not both.
Args:
session: A TensorFlow session that contains the graph to convert.
graph_def: A graph def that we should convert.
write_callback: A function pointer that can be used to write intermediate
steps of graph transformation (optional).
Returns:
A new graphdef with all ops contained in OpHints being replaced by
a single op call with the right parameters.
Raises:
ValueError: If both session and graph_def are provided. |
688 | 686 | _parse_array | tensorflow/tensorflow/lite/python/tflite_convert.py | 39 | function | |
689 | 687 | _parse_set | tensorflow/tensorflow/lite/python/tflite_convert.py | 45 | function | |
690 | 688 | _parse_inference_type | tensorflow/tensorflow/lite/python/tflite_convert.py | 51 | function | Converts the inference type to the value of the constant.
Args:
value: str representing the inference type.
flag: str representing the flag name.
Returns:
tf.dtype.
Raises:
ValueError: Unsupported value. |
691 | 689 | _get_tflite_converter | tensorflow/tensorflow/lite/python/tflite_convert.py | 74 | function | Makes a TFLiteConverter object based on the flags provided.
Args:
flags: argparse.Namespace object containing TFLite flags.
Returns:
TFLiteConverter object.
Raises:
ValueError: Invalid flags. |
692 | 690 | _convert_tf1_model | tensorflow/tensorflow/lite/python/tflite_convert.py | 122 | function | Calls function to convert the TensorFlow 1.X model into a TFLite model.
Args:
flags: argparse.Namespace object.
Raises:
ValueError: Invalid flags. |
693 | 691 | _convert_tf2_model | tensorflow/tensorflow/lite/python/tflite_convert.py | 219 | function | Calls function to convert the TensorFlow 2.0 model into a TFLite model.
Args:
flags: argparse.Namespace object.
Raises:
ValueError: Unsupported file format. |
694 | 692 | _check_tf1_flags | tensorflow/tensorflow/lite/python/tflite_convert.py | 244 | function | Checks the parsed and unparsed flags to ensure they are valid in 1.X.
Raises an error if previously supported unparsed flags are found. Raises an
error for parsed flags that don't meet the required conditions.
Args:
flags: argparse.Namespace object containing TFLite flags.
unparsed: List of unparsed flags.
Raises:
ValueError: Invalid flags. |
695 | 693 | _check_tf2_flags | tensorflow/tensorflow/lite/python/tflite_convert.py | 313 | function | Checks the parsed and unparsed flags to ensure they are valid in 2.X.
Args:
flags: argparse.Namespace object containing TFLite flags.
Raises:
ValueError: Invalid flags. |
696 | 694 | _get_tf1_flags | tensorflow/tensorflow/lite/python/tflite_convert.py | 327 | function | Returns ArgumentParser for tflite_convert for TensorFlow 1.X.
Args:
parser: ArgumentParser |
697 | 695 | _get_tf2_flags | tensorflow/tensorflow/lite/python/tflite_convert.py | 511 | function | Returns ArgumentParser for tflite_convert for TensorFlow 2.0.
Args:
parser: ArgumentParser |
698 | 696 | _ParseExperimentalNewConverter | tensorflow/tensorflow/lite/python/tflite_convert.py | 535 | class | Helper class to parse --experimental_new_converter argument. |
699 | 697 | _get_parser | tensorflow/tensorflow/lite/python/tflite_convert.py | 565 | function | Returns an ArgumentParser for tflite_convert.
Args:
use_v2_converter: Indicates which converter to return.
Return: ArgumentParser. |
700 | 698 | run_main | tensorflow/tensorflow/lite/python/tflite_convert.py | 596 | function | Main in tflite_convert.py. |
701 | 699 | main | tensorflow/tensorflow/lite/python/tflite_convert.py | 639 | function | |
702 | 700 | TestModels | tensorflow/tensorflow/lite/python/tflite_convert_test.py | 45 | class | |
703 | 701 | TfLiteConvertV1Test | tensorflow/tensorflow/lite/python/tflite_convert_test.py | 81 | class | |
704 | 702 | TfLiteConvertV2Test | tensorflow/tensorflow/lite/python/tflite_convert_test.py | 298 | class | |
705 | 703 | ArgParserTest | tensorflow/tensorflow/lite/python/tflite_convert_test.py | 339 | class | |
706 | 704 | convert_dtype_to_tflite_type | tensorflow/tensorflow/lite/python/util.py | 59 | function | Converts tf.dtype to TFLite proto type.
Args:
tf_dtype: tf.dtype
Raises:
ValueError: Unsupported tf.dtype.
Returns:
types_flag_pb2. |
707 | 705 | get_tensor_name | tensorflow/tensorflow/lite/python/util.py | 77 | function | Returns name of the input tensor.
Args:
tensor: tf.Tensor
Returns:
str |
708 | 706 | get_tensors_from_tensor_names | tensorflow/tensorflow/lite/python/util.py | 98 | function | Gets the Tensors associated with the `tensor_names` in the provided graph.
Args:
graph: TensorFlow Graph.
tensor_names: List of strings that represent names of tensors in the graph.
Returns:
A list of Tensor objects in the same order the names are provided.
Raises:
ValueError:
tensor_names contains an invalid tensor name. |
709 | 707 | set_tensor_shapes | tensorflow/tensorflow/lite/python/util.py | 141 | function | Sets Tensor shape for each tensor if the shape is defined.
Args:
tensors: TensorFlow ops.Tensor.
shapes: Dict of strings representing input tensor names to list of
integers representing input shapes (e.g., {"foo": [1, 16, 16, 3]}).
Raises:
ValueError:
`shapes` contains an invalid tensor.
`shapes` contains an invalid shape for a valid tensor. |
710 | 708 | get_grappler_config | tensorflow/tensorflow/lite/python/util.py | 172 | function | Creates a tf.compat.v1.ConfigProto for configuring Grappler.
Args:
optimizers_list: List of strings that represents the list of optimizers.
Returns:
tf.ConfigProto. |
711 | 709 | run_graph_optimizations | tensorflow/tensorflow/lite/python/util.py | 188 | function | Apply standard TensorFlow optimizations to the graph_def.
Args:
graph_def: Frozen GraphDef to be optimized.
input_arrays: List of arrays that are considered inputs of the graph.
output_arrays: List of arrays that are considered outputs of the graph.
config: tf.ConfigProto.
graph: TensorFlow Graph. Required when Eager mode is enabled. (default None)
Returns:
A new, optimized GraphDef. |
712 | 710 | _convert_op_hints_if_present | tensorflow/tensorflow/lite/python/util.py | 230 | function | |
713 | 711 | freeze_graph | tensorflow/tensorflow/lite/python/util.py | 241 | function | Returns a frozen GraphDef.
Runs a Grappler pass and freezes a graph with Variables in it. Otherwise the
existing GraphDef is returned. The Grappler pass is only run on models that
are frozen in order to inline the functions in the graph.
If OpHints is present, it will try to convert the OpHint graph.
Args:
sess: TensorFlow Session.
input_tensors: List of input tensors.
output_tensors: List of output tensors (only .name is used from this).
Returns:
Frozen GraphDef. |
714 | 712 | is_frozen_graph | tensorflow/tensorflow/lite/python/util.py | 281 | function | Determines if the graph is frozen.
Determines if a graph has previously been frozen by checking for any
operations of type Variable*. If variables are found, the graph is not frozen.
Args:
sess: TensorFlow Session.
Returns:
Bool. |
715 | 713 | build_debug_info_func | tensorflow/tensorflow/lite/python/util.py | 300 | function | Returns a method to retrieve the `GraphDebugInfo` from the original graph.
Args:
original_graph: The original `Graph` containing all the op stack traces.
Returns:
A function which retrieves the stack traces from the original graph and
converts them to a `GraphDebugInfo` for a given set of nodes. |
716 | 714 | convert_debug_info_func | tensorflow/tensorflow/lite/python/util.py | 339 | function | Returns a method to retrieve the `GraphDebugInfo` from the original graph.
Args:
saved_debug_info: The `GraphDebugInfo` containing all the debug info.
Returns:
A function which retrieves the stack traces from the original graph and
converts them to a `GraphDebugInfo` for a given set of nodes. |
717 | 715 | get_debug_info | tensorflow/tensorflow/lite/python/util.py | 368 | function | Returns the debug info for the original nodes in the `converted_graph`.
Args:
nodes_to_debug_info_func: The method to collect the op debug info for the
nodes.
converted_graph: A `GraphDef` after optimization and transformation.
Returns:
`GraphDebugInfo` for all the original nodes in `converted_graph`. |
718 | 716 | convert_bytes_to_c_source | tensorflow/tensorflow/lite/python/util.py | 399 | function | Returns strings representing a C constant array containing `data`.
Args:
data: Byte array that will be converted into a C constant.
array_name: String to use as the variable name for the constant array.
max_line_width: The longest line length, for formatting purposes.
include_guard: Name to use for the include guard macro definition.
include_path: Optional path to include in the source file.
use_tensorflow_license: Whether to include the standard TensorFlow Apache2
license in the generated files.
Returns:
Text that can be compiled as a C source file to link in the data as a
literal array of values.
Text that can be used as a C header file to reference the literal array. |
719 | 717 | UtilTest | tensorflow/tensorflow/lite/python/util_test.py | 39 | class | |
720 | 718 | TensorFunctionsTest | tensorflow/tensorflow/lite/python/util_test.py | 124 | class | |
721 | 719 | wrapped_toco_convert | tensorflow/tensorflow/lite/python/wrap_toco.py | 29 | function | Wraps TocoConvert with lazy loader. |
722 | 720 | wrapped_get_potentially_supported_ops | tensorflow/tensorflow/lite/python/wrap_toco.py | 41 | function | Wraps TocoGetPotentiallySupportedOps with lazy loader. |
723 | 721 | wrapped_experimental_mlir_quantize | tensorflow/tensorflow/lite/python/wrap_toco.py | 46 | function | Wraps experimental mlir quantize model. |
724 | 722 | wrapped_experimental_mlir_sparsify | tensorflow/tensorflow/lite/python/wrap_toco.py | 55 | function | Wraps experimental mlir sparsify model. |
725 | 723 | Calibrator | tensorflow/tensorflow/lite/python/optimize/calibrator.py | 33 | class | Calibrates a floating point model and then quantizes it.
This is an internal class, not a public interface. |
726 | 724 | CalibratorTest | tensorflow/tensorflow/lite/python/optimize/calibrator_test.py | 33 | class | |
727 | 725 | TemporaryDirectoryResource | tensorflow/tensorflow/lite/schema/upgrade_schema.py | 57 | function | |
728 | 726 | Converter | tensorflow/tensorflow/lite/schema/upgrade_schema.py | 65 | class | Converts TensorFlow flatbuffer models from old to new version of schema.
This can convert from any version to the latest version. It uses
an incremental upgrade strategy to go from version to version.
Usage:
converter = Converter()
converter.Convert("a.tflite", "a.json")
converter.Convert("b.json", "b.tflite") |
729 | 727 | main | tensorflow/tensorflow/lite/schema/upgrade_schema.py | 344 | function | |
730 | 728 | JsonDumpAndFlush | tensorflow/tensorflow/lite/schema/upgrade_schema_test.py | 242 | function | Write the dictionary `data` to a JSON file `fp` (and flush).
Args:
data: a dictionary that is JSON serializable.
fp: File-like object |
731 | 729 | TestSchemaUpgrade | tensorflow/tensorflow/lite/schema/upgrade_schema_test.py | 253 | class | |
732 | 730 | main | tensorflow/tensorflow/lite/testing/generate_examples.py | 100 | function | |
733 | 731 | MultiGenState | tensorflow/tensorflow/lite/testing/generate_examples_lib.py | 176 | class | State of multiple set generation process.
This state class stores the information needed when generating the examples
for multiple test sets. The stored information includes an open archive object
to be shared, information on the test target for the current iteration of
generation, and the accumulated generation results. |
734 | 732 | Options | tensorflow/tensorflow/lite/testing/generate_examples_lib.py | 203 | class | All options for example generation. |
735 | 733 | _prepare_dir | tensorflow/tensorflow/lite/testing/generate_examples_lib.py | 244 | function | |
736 | 734 | generate_examples | tensorflow/tensorflow/lite/testing/generate_examples_lib.py | 256 | function | Generate examples for a test set.
Args:
options: Options containing information to generate examples.
Raises:
RuntimeError: if the test function cannot be found. |
737 | 735 | generate_multi_set_examples | tensorflow/tensorflow/lite/testing/generate_examples_lib.py | 294 | function | Generate examples for test sets.
Args:
options: Options containing information to generate examples.
test_sets: List of the name of test sets to generate examples. |
738 | 736 | make_report_table | tensorflow/tensorflow/lite/testing/generate_examples_report.py | 32 | function | Make an HTML report of the success/failure reports.
Args:
fp: File-like object in which to put the html.
title: "Title of the zip file this pertains to."
reports: a list of conversion attempts. (report_args, report_vals) i.e.
({"shape": [1,2,3], "type": "tf.float32"},
{"tf": "SUCCESS", "toco": "FAILURE", "toco_log": "Unsupported type.",
"tf_log": ""}) |
739 | 737 | toco_options | tensorflow/tensorflow/lite/testing/toco_convert.py | 31 | function | Create TOCO options to process a model.
Args:
data_types: input and inference types used by TOCO.
input_arrays: names of the input tensors
output_arrays: name of the output tensors
shapes: shapes of the input tensors
extra_toco_options: additional toco options
Returns:
the options in a string. |
740 | 738 | toco_convert | tensorflow/tensorflow/lite/testing/toco_convert.py | 78 | function | Convert a model's graph def into a tflite model.
NOTE: this currently shells out to the toco binary, but we would like
convert to Python API tooling in the future.
Args:
options: An Options instance.
graph_def: A GraphDef object.
input_tensors: List of input tensor tuples `(name, shape, type)`.
output_tensors: List of output tensors (names).
**kwargs: Extra options to be passed.
Returns:
output tflite model, log_txt from conversion
or None, log_txt if it did not convert properly. |
741 | 739 | register_make_test_function | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 55 | function | |
742 | 740 | get_test_function | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 65 | function | Get the test function according to the test function name. |
743 | 741 | ExtraTocoOptions | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 88 | class | Additional toco options besides input, output, shape. |
744 | 742 | create_tensor_data | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 106 | function | Build tensor data spreading the range [min_value, max_value). |
745 | 743 | create_scalar_data | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 126 | function | Build scalar tensor data range from min_value to max_value exclusively. |
746 | 744 | freeze_graph | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 144 | function | Freeze the current graph.
Args:
session: TensorFlow session containing the graph
outputs: List of output tensors
Returns:
The frozen graph_def. |
747 | 745 | format_result | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 158 | function | Convert a tensor to a format that can be used in test specs. |
748 | 746 | write_examples | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 168 | function | Given a list `examples`, write a text format representation.
The file format is csv-like with a simple repeated pattern. We would like
to use proto here, but we can't yet due to interfacing with the Android
team using this format.
Args:
fp: File-like object to write to.
examples: Example dictionary consisting of keys "inputs" and "outputs" |
749 | 747 | write_test_cases | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 196 | function | Given a dictionary of `examples`, write a text format representation.
The file format is protocol-buffer-like, even though we don't use proto due
to the needs of the Android team.
Args:
fp: File-like object to write to.
model_name: Filename where the model was written to, relative to filename.
examples: Example dictionary consisting of keys "inputs" and "outputs" |
750 | 748 | get_input_shapes_map | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 225 | function | Gets a map of input names to shapes.
Args:
input_tensors: List of input tensor tuples `(name, shape, type)`.
Returns:
{string : list of integers}. |
751 | 749 | _normalize_output_name | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 251 | function | Remove :0 suffix from tensor names. |
752 | 750 | make_zip_of_tests | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 262 | function | Helper to make a zip file of a bunch of TensorFlow models.
This does a cartesian product of the dictionary of test_parameters and
calls make_graph() for each item in the cartesian product set.
If the graph is built successfully, then make_test_inputs() is called to
build expected input/output value pairs. The model is then converted to tflite
with toco, and the examples are serialized with the tflite model into a zip
file (2 files per item in the cartesian product set).
Args:
options: An Options instance.
test_parameters: Dictionary mapping to lists for each parameter.
e.g. `{"strides": [[1,3,3,1], [1,2,2,1]], "foo": [1.2, 1.3]}`
make_graph: function that takes current parameters and returns tuple
`[input1, input2, ...], [output1, output2, ...]`
make_test_inputs: function taking `curr_params`, `session`, `input_tensors`,
`output_tensors` and returns tuple `(input_values, output_values)`.
extra_toco_options: Additional toco options.
use_frozen_graph: Whether or not to freeze the graph before the toco converter.
expected_tf_failures: Number of times tensorflow is expected to fail in
executing the input graphs. In some cases it is OK for TensorFlow to fail
because one or more combinations of parameters are invalid.
Raises:
RuntimeError: if there are converter errors that can't be ignored. |
753 | 751 | get_filepath | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 47 | function | Returns the full path of the filename.
Args:
filename: Subdirectory and name of the model file.
base_dir: Base directory containing model file.
Returns:
str. |
754 | 752 | get_image | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 63 | function | Returns an image loaded into an np.ndarray with dims [1, size, size, 3].
Args:
size: Size of image.
Returns:
np.ndarray. |
755 | 753 | _convert | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 80 | function | Converts the model.
Args:
converter: TFLiteConverter object.
**kwargs: Additional arguments to be passed into the converter. Supported
flags are {"target_ops", "post_training_quantize", "quantize_to_float16"}.
Returns:
The converted TFLite model in serialized format.
Raises:
ValueError: Invalid version number. |
756 | 754 | _get_tflite_interpreter | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 103 | function | Creates a TFLite interpreter with resized input tensors.
Args:
tflite_model: Serialized TensorFlow Lite model.
input_shapes_resize: A map where the key is the input tensor name and the
value is the shape of the input tensor. This resize happens after model
conversion, prior to calling `allocate_tensors()`. (default None)
Returns:
lite.Interpreter |
757 | 755 | _get_input_data_map | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 127 | function | Generates a map of input data based on the TFLite model.
Args:
tflite_model: Serialized TensorFlow Lite model.
input_data: List of np.ndarray.
Returns:
{str: [np.ndarray]}. |
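The name-to-data map this returns amounts to a positional pairing; a minimal sketch, assuming hypothetical input-detail dicts shaped like those returned by `lite.Interpreter.get_input_details()` (`build_input_data_map` is an illustrative name, not the library function):

```python
def build_input_data_map(input_details, input_data):
    """Pair each input tensor's name with the corresponding ndarray,
    matching them positionally (input_data must line up with the
    interpreter's input order)."""
    if len(input_details) != len(input_data):
        raise ValueError("Expected %d inputs, got %d"
                         % (len(input_details), len(input_data)))
    return {detail["name"]: data
            for detail, data in zip(input_details, input_data)}

# Hypothetical input details; real ones come from get_input_details().
details = [{"name": "input1"}, {"name": "input2"}]
data_map = build_input_data_map(details, [[1.0], [2.0]])
print(sorted(data_map))  # ['input1', 'input2']
```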
758 | 756 | _generate_random_input_data | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 146 | function | Generates input data based on the input tensors in the TFLite model.
Args:
tflite_model: Serialized TensorFlow Lite model.
seed: Integer seed for the random generator. (default None)
input_data_range: A map where the key is the input tensor name and
the value is a tuple (min_val, max_val) which specifies the value range of
the corresponding input tensor. For example, `{'input1': (1, 5)}` means to
generate a random value for tensor `input1` within range [1.0, 5.0)
(half-inclusive). (default None)
input_shapes_resize: A map where the key is the input tensor name and the
value is the shape of the input tensor. This resize happens after model
conversion, prior to calling `allocate_tensors()`. (default None)
Returns:
([np.ndarray], {str : [np.ndarray]}). |
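The half-open `[min_val, max_val)` convention for `input_data_range` can be sketched with NumPy (an assumption-laden illustration of the documented contract, not the library's code; `random_tensor` is a hypothetical helper):

```python
import numpy as np

def random_tensor(shape, value_range=None, seed=None):
    """Draw uniform random data for one input tensor. value_range is an
    optional (min_val, max_val) pair giving a half-open [min, max)
    range, mirroring the input_data_range convention above."""
    rng = np.random.RandomState(seed)
    data = rng.uniform(size=shape)                   # in [0, 1)
    if value_range is not None:
        min_val, max_val = value_range
        data = min_val + data * (max_val - min_val)  # rescale to [min, max)
    return data.astype(np.float32)

# {'input1': (1, 5)} style range: every value lands in [1.0, 5.0).
x = random_tensor((2, 3), value_range=(1, 5), seed=0)
print(x.shape)  # (2, 3)
```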
759 | 757 | _evaluate_tflite_model | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 191 | function | Returns evaluation of input data on TFLite model.
Args:
tflite_model: Serialized TensorFlow Lite model.
input_data: List of np.ndarray.
input_shapes_resize: A map where the key is the input tensor name and the
value is the shape of the input tensor. This resize happens after model
conversion, prior to calling `allocate_tensors()`. (default None)
Returns:
List of np.ndarray. |
760 | 758 | evaluate_frozen_graph | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 222 | function | Returns a function that evaluates the frozen graph on input data.
Args:
filename: Full filepath of file containing frozen GraphDef.
input_arrays: List of input tensors to freeze graph with.
output_arrays: List of output tensors to freeze graph with.
Returns:
Lambda function ([np.ndarray data] : [np.ndarray result]). |
761 | 759 | evaluate_saved_model | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 260 | function | Returns a function that evaluates the SavedModel on input data.
Args:
directory: SavedModel directory to convert.
tag_set: Set of tags identifying the MetaGraphDef within the SavedModel to
analyze. All tags in the tag set must be present.
signature_key: Key identifying SignatureDef containing inputs and outputs.
Returns:
Lambda function ([np.ndarray data] : [np.ndarray result]). |
762 | 760 | evaluate_keras_model | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 286 | function | Returns a function that evaluates the tf.keras model on input data.
Args:
filename: Full filepath of HDF5 file containing the tf.keras model.
Returns:
Lambda function ([np.ndarray data] : [np.ndarray result]). |
763 | 761 | compare_models | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 299 | function | Compares TensorFlow and TFLite models.
Unless the input data is provided, the models are compared with random data.
Args:
tflite_model: Serialized TensorFlow Lite model.
tf_eval_func: Lambda function that takes in input data and outputs the
results of the TensorFlow model ([np.ndarray data] : [np.ndarray result]).
input_shapes_resize: A map where the key is the input tensor name and the
value is the shape of the input tensor. This resize happens after model
conversion, prior to calling `allocate_tensors()`. (default None)
input_data: np.ndarray to pass into models during inference. (default None)
input_data_range: A map where the key is the input tensor name and
the value is a tuple (min_val, max_val) which specifies the value range of
the corresponding input tensor. For example, `{'input1': (1, 5)}` means to
generate a random value for tensor `input1` within range [1.0, 5.0)
(half-inclusive). (default None)
tolerance: Number of decimal places to check accuracy to. (default 5). |
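The decimal-places `tolerance` contract can be sketched with NumPy's almost-equal assertion (a sketch of the documented comparison, not the library's exact code; `outputs_match` is a hypothetical name):

```python
import numpy as np

def outputs_match(tf_results, tflite_results, tolerance=5):
    """Return True iff every pair of output arrays agrees to `tolerance`
    decimal places, the same contract compare_models documents."""
    try:
        for tf_out, lite_out in zip(tf_results, tflite_results):
            np.testing.assert_almost_equal(tf_out, lite_out,
                                           decimal=tolerance)
        return True
    except AssertionError:
        return False

# A 1e-6 discrepancy passes at 5 decimal places but fails at 7.
a = [np.array([1.000001, 2.0])]
b = [np.array([1.000002, 2.0])]
print(outputs_match(a, b, tolerance=5), outputs_match(a, b, tolerance=7))
# True False
```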
764 | 762 | compare_models_v2 | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 336 | function | Compares TensorFlow and TFLite models for TensorFlow 2.0.
Unless the input data is provided, the models are compared with random data.
Currently only 1 input and 1 output are supported by this function.
Args:
tflite_model: Serialized TensorFlow Lite model.
tf_eval_func: Function to evaluate TensorFlow model. Either a lambda
function that takes in input data and outputs the results or a TensorFlow
ConcreteFunction.
input_data: np.ndarray to pass into models during inference. (default None).
input_data_range: A map where the key is the input tensor name and
the value is a tuple (min_val, max_val) which specifies the value range of
the corresponding input tensor. For example, `{'input1': (1, 5)}` means to
generate a random value for tensor `input1` within range [1.0, 5.0)
(half-inclusive). (default None)
tolerance: Number of decimal places to check accuracy to. (default 5) |
765 | 763 | test_frozen_graph_quant | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 390 | function | Sanity check to validate post quantize flag alters the graph.
This test does not check correctness of the converted model. It converts the
TensorFlow frozen graph to TFLite with and without the post_training_quantize
flag. It ensures that some tensors have different types between the float and
quantized models in the case of an all-TFLite or mix-and-match model, and
that tensor types do not change in the case of an all-Flex model.
Args:
filename: Full filepath of file containing frozen GraphDef.
input_arrays: List of input tensors to freeze graph with.
output_arrays: List of output tensors to freeze graph with.
input_shapes: Dict of strings representing input tensor names to list of
integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}).
Automatically determined when input shapes is None (e.g., {"foo" : None}).
(default None)
**kwargs: Additional arguments to be passed into the converter.
Raises:
ValueError: post_training_quantize flag doesn't act as intended. |
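The property this sanity check enforces reduces to comparing the sets of tensor dtypes in the two converted models; a minimal sketch (the dtype lists are hypothetical stand-ins for what would be read out of each model's tensors):

```python
def quantization_changed_types(float_dtypes, quant_dtypes):
    """Check the property the test above enforces: for an all-TFLite or
    mix-and-match model, post-training quantization should change at
    least one tensor dtype; identical dtype sets mean the
    post_training_quantize flag had no effect."""
    return set(float_dtypes) != set(quant_dtypes)

# Quantization introduced int8 tensors -> flag acted as intended.
print(quantization_changed_types(["float32", "float32"],
                                 ["int8", "float32"]))   # True
# No type change -> would raise ValueError in the real test.
print(quantization_changed_types(["float32", "float32"],
                                 ["float32"]))           # False
```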
766 | 764 | test_frozen_graph | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 459 | function | Validates the TensorFlow frozen graph converts to a TFLite model.
Converts the TensorFlow frozen graph to TFLite and checks the accuracy of the
model on random data.
Args:
filename: Full filepath of file containing frozen GraphDef.
input_arrays: List of input tensors to freeze graph with.
output_arrays: List of output tensors to freeze graph with.
input_shapes: Dict of strings representing input tensor names to list of
integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}).
Automatically determined when input shapes is None (e.g., {"foo" : None}).
(default None)
input_shapes_resize: A map where the key is the input tensor name and the
value is the shape of the input tensor. This resize happens after model
conversion, prior to calling `allocate_tensors()`. (default None)
input_data: np.ndarray to pass into models during inference. (default None).
input_data_range: A map where the key is the input tensor name and
the value is a tuple (min_val, max_val) which specifies the value range of
the corresponding input tensor. For example, `{'input1': (1, 5)}` means to
generate a random value for tensor `input1` within range [1.0, 5.0)
(half-inclusive). (default None)
**kwargs: Additional arguments to be passed into the converter. |
767 | 765 | test_saved_model | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 504 | function | Validates the TensorFlow SavedModel converts to a TFLite model.
Converts the TensorFlow SavedModel to TFLite and checks the accuracy of the
model on random data.
Args:
directory: SavedModel directory to convert.
input_shapes: Dict of strings representing input tensor names to list of
integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}).
Automatically determined when input shapes is None (e.g., {"foo" : None}).
(default None)
tag_set: Set of tags identifying the MetaGraphDef within the SavedModel to
analyze. All tags in the tag set must be present.
signature_key: Key identifying SignatureDef containing inputs and outputs.
input_data: np.ndarray to pass into models during inference. (default None).
input_data_range: A map where the key is the input tensor name and
the value is a tuple (min_val, max_val) which specifies the value range of
the corresponding input tensor. For example, `{'input1': (1, 5)}` means to
generate a random value for tensor `input1` within range [1.0, 5.0)
(half-inclusive). (default None)
**kwargs: Additional arguments to be passed into the converter. |
768 | 766 | test_saved_model_v2 | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 548 | function | Validates the TensorFlow SavedModel converts to a TFLite model.
Converts the TensorFlow SavedModel to TFLite and checks the accuracy of the
model on random data.
Args:
directory: SavedModel directory to convert.
tag_set: Set of tags identifying the MetaGraphDef within the SavedModel to
analyze. All tags in the tag set must be present.
signature_key: Key identifying SignatureDef containing inputs and outputs.
input_data: np.ndarray to pass into models during inference. (default None).
input_data_range: A map where the key is the input tensor name and
the value is a tuple (min_val, max_val) which specifies the value range of
the corresponding input tensor. For example, `{'input1': (1, 5)}` means to
generate a random value for tensor `input1` within range [1.0, 5.0)
(half-inclusive). (default None)
**kwargs: Additional arguments to be passed into the converter. |
769 | 767 | test_saved_model_v2_quant_float16 | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 587 | function | Validates the TensorFlow SavedModel converts to a TFLite model. |
770 | 768 | test_keras_model | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 623 | function | Validates the tf.keras model converts to a TFLite model.
Converts the tf.keras model to TFLite and checks the accuracy of the model on
random data.
Args:
filename: Full filepath of HDF5 file containing the tf.keras model.
input_arrays: List of input tensors to freeze graph with.
input_shapes: Dict of strings representing input tensor names to list of
integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}).
Automatically determined when input shapes is None (e.g., {"foo" : None}).
(default None)
input_data: np.ndarray to pass into models during inference. (default None).
input_data_range: A map where the key is the input tensor name and
the value is a tuple (min_val, max_val) which specifies the value range of
the corresponding input tensor. For example, `{'input1': (1, 5)}` means to
generate a random value for tensor `input1` within range [1.0, 5.0)
(half-inclusive). (default None)
**kwargs: Additional arguments to be passed into the converter. |
771 | 769 | test_keras_model_v2 | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 661 | function | Validates the tf.keras model converts to a TFLite model.
Converts the tf.keras model to TFLite and checks the accuracy of the model on
random data.
Args:
filename: Full filepath of HDF5 file containing the tf.keras model.
input_shapes: List of list of integers representing input shapes in the
order of the tf.keras model's .input attribute (e.g., [[1, 16, 16, 3]]).
(default None)
input_data: np.ndarray to pass into models during inference. (default None).
input_data_range: A map where the key is the input tensor name and
the value is a tuple (min_val, max_val) which specifies the value range of
the corresponding input tensor. For example, `{'input1': (1, 5)}` means to
generate a random value for tensor `input1` within range [1.0, 5.0)
(half-inclusive). (default None)
**kwargs: Additional arguments to be passed into the converter. |
772 | 770 | EvaluateFrozenGraph | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib_test.py | 42 | class | |
773 | 771 | EvaluateSavedModel | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib_test.py | 142 | class | |
774 | 772 | EvaluateKerasModel | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib_test.py | 160 | class | |
775 | 773 | make_abs_tests | tensorflow/tensorflow/lite/testing/op_tests/abs.py | 28 | function | Make a set of tests to do abs. |
776 | 774 | make_add_n_tests | tensorflow/tensorflow/lite/testing/op_tests/add_n.py | 27 | function | Make a set of tests for AddN op. |
777 | 775 | make_arg_min_max_tests | tensorflow/tensorflow/lite/testing/op_tests/arg_min_max.py | 29 | function | Make a set of tests to do arg_max. |
778 | 776 | make_batch_to_space_nd_tests | tensorflow/tensorflow/lite/testing/op_tests/batch_to_space_nd.py | 28 | function | Make a set of tests to do batch_to_space_nd. |
779 | 777 | make_binary_op_tests | tensorflow/tensorflow/lite/testing/op_tests/binary_op.py | 26 | function | Make a set of tests to do binary ops with and without broadcast. |
780 | 778 | make_binary_op_tests_func | tensorflow/tensorflow/lite/testing/op_tests/binary_op.py | 239 | function | Return a function that does a test on a binary operator. |
781 | 779 | make_add_tests | tensorflow/tensorflow/lite/testing/op_tests/binary_op.py | 245 | function | |
782 | 780 | make_div_tests | tensorflow/tensorflow/lite/testing/op_tests/binary_op.py | 250 | function | Make zip tests for div op with 5D case. |
783 | 781 | make_sub_tests | tensorflow/tensorflow/lite/testing/op_tests/binary_op.py | 267 | function | Make zip tests for sub op with additional cases. |
784 | 782 | make_mul_tests | tensorflow/tensorflow/lite/testing/op_tests/binary_op.py | 287 | function | |
785 | 783 | make_pow_tests | tensorflow/tensorflow/lite/testing/op_tests/binary_op.py | 292 | function | |
786 | 784 | make_floor_div_tests | tensorflow/tensorflow/lite/testing/op_tests/binary_op.py | 297 | function | |
787 | 785 | make_floor_mod_tests | tensorflow/tensorflow/lite/testing/op_tests/binary_op.py | 302 | function | |
788 | 786 | make_squared_difference_tests | tensorflow/tensorflow/lite/testing/op_tests/binary_op.py | 307 | function | |
789 | 787 | make_cast_tests | tensorflow/tensorflow/lite/testing/op_tests/cast.py | 27 | function | Generate examples for cast. |
790 | 788 | make_ceil_tests | tensorflow/tensorflow/lite/testing/op_tests/ceil.py | 27 | function | Make a set of tests to do ceil. |
791 | 789 | make_concat_tests | tensorflow/tensorflow/lite/testing/op_tests/concat.py | 27 | function | Make a set of tests to do concatenation. |
792 | 790 | make_constant_tests | tensorflow/tensorflow/lite/testing/op_tests/constant.py | 31 | function | Make a set of tests to do constant ops. |
793 | 791 | make_conv_tests | tensorflow/tensorflow/lite/testing/op_tests/conv.py | 28 | function | Make a set of tests to do convolution. |
794 | 792 | make_conv2d_transpose_tests | tensorflow/tensorflow/lite/testing/op_tests/conv2d_transpose.py | 28 | function | Make a set of tests to do transpose_conv. |
795 | 793 | make_conv_activation_tests | tensorflow/tensorflow/lite/testing/op_tests/conv_activation.py | 27 | function | Make a set of tests to do convolution with activation. |
796 | 794 | make_conv_relu6_tests | tensorflow/tensorflow/lite/testing/op_tests/conv_activation.py | 132 | function | Make a set of tests to do conv_relu6. |
797 | 795 | make_conv_relu_tests | tensorflow/tensorflow/lite/testing/op_tests/conv_activation.py | 138 | function | Make a set of tests to do conv_relu. |
798 | 796 | relu1 | tensorflow/tensorflow/lite/testing/op_tests/conv_activation.py | 143 | function | |
799 | 797 | make_conv_relu1_tests | tensorflow/tensorflow/lite/testing/op_tests/conv_activation.py | 151 | function | Make a set of tests to do conv_relu1. |
800 | 798 | make_conv_to_depthwiseconv_with_shared_weights_tests | tensorflow/tensorflow/lite/testing/op_tests/conv_to_depthwiseconv_with_shared_weights.py | 28 | function | Make a test where 2 Conv ops share the same constant weight tensor. |
801 | 799 | make_conv_with_shared_weights_tests | tensorflow/tensorflow/lite/testing/op_tests/conv_with_shared_weights.py | 28 | function | Make a test where 2 Conv ops share the same constant weight tensor. |
802 | 800 | make_cos_tests | tensorflow/tensorflow/lite/testing/op_tests/cos.py | 28 | function | Make a set of tests to do cos. |
803 | 801 | make_depth_to_space_tests | tensorflow/tensorflow/lite/testing/op_tests/depth_to_space.py | 27 | function | Make a set of tests to do depth_to_space. |
804 | 802 | make_depthwiseconv_tests | tensorflow/tensorflow/lite/testing/op_tests/depthwiseconv.py | 28 | function | Make a set of tests to do convolution. |
805 | 803 | _make_elementwise_tests | tensorflow/tensorflow/lite/testing/op_tests/elementwise.py | 26 | function | Make a set of tests to do element-wise operations. |
806 | 804 | make_sin_tests | tensorflow/tensorflow/lite/testing/op_tests/elementwise.py | 57 | function | Make a set of tests to do sin. |
807 | 805 | make_log_tests | tensorflow/tensorflow/lite/testing/op_tests/elementwise.py | 63 | function | Make a set of tests to do log. |
808 | 806 | make_sqrt_tests | tensorflow/tensorflow/lite/testing/op_tests/elementwise.py | 69 | function | Make a set of tests to do sqrt. |
809 | 807 | make_rsqrt_tests | tensorflow/tensorflow/lite/testing/op_tests/elementwise.py | 75 | function | Make a set of tests to do 1/sqrt. |
810 | 808 | make_square_tests | tensorflow/tensorflow/lite/testing/op_tests/elementwise.py | 81 | function | Make a set of tests to do square. |
811 | 809 | make_elu_tests | tensorflow/tensorflow/lite/testing/op_tests/elu.py | 28 | function | Make a set of tests to do (float) tf.nn.elu. |
812 | 810 | make_embedding_lookup_tests | tensorflow/tensorflow/lite/testing/op_tests/embedding_lookup.py | 27 | function | Make a set of tests to do gather. |
813 | 811 | make_equal_tests | tensorflow/tensorflow/lite/testing/op_tests/equal.py | 27 | function | Make a set of tests to do equal. |
814 | 812 | make_exp_tests | tensorflow/tensorflow/lite/testing/op_tests/exp.py | 27 | function | Make a set of tests to do exp. |
815 | 813 | make_expand_dims_tests | tensorflow/tensorflow/lite/testing/op_tests/expand_dims.py | 28 | function | Make a set of tests to do expand_dims. |
816 | 814 | make_eye_tests | tensorflow/tensorflow/lite/testing/op_tests/eye.py | 28 | function | Make a set of tests for tf.eye op. |
817 | 815 | make_fill_tests | tensorflow/tensorflow/lite/testing/op_tests/fill.py | 28 | function | Make a set of tests to do fill. |
818 | 816 | make_floor_tests | tensorflow/tensorflow/lite/testing/op_tests/floor.py | 27 | function | Make a set of tests to do floor. |
819 | 817 | make_fully_connected_tests | tensorflow/tensorflow/lite/testing/op_tests/fully_connected.py | 28 | function | Make a set of tests to do fully_connected. |
820 | 818 | make_fused_batch_norm_tests | tensorflow/tensorflow/lite/testing/op_tests/fused_batch_norm.py | 27 | function | Make a set of tests to do fused_batch_norm. |
821 | 819 | make_gather_tests | tensorflow/tensorflow/lite/testing/op_tests/gather.py | 27 | function | Make a set of tests to do gather. |
822 | 820 | make_gather_nd_tests | tensorflow/tensorflow/lite/testing/op_tests/gather_nd.py | 27 | function | Make a set of tests to do gather_nd. |
823 | 821 | make_gather_with_constant_tests | tensorflow/tensorflow/lite/testing/op_tests/gather_with_constant.py | 28 | function | Make a set of tests which feed a constant to the gather op in toco. |
824 | 822 | make_global_batch_norm_tests | tensorflow/tensorflow/lite/testing/op_tests/global_batch_norm.py | 27 | function | Make a set of tests to do batch_norm_with_global_normalization. |
825 | 823 | make_greater_tests | tensorflow/tensorflow/lite/testing/op_tests/greater.py | 27 | function | Make a set of tests to do greater. |
826 | 824 | make_greater_equal_tests | tensorflow/tensorflow/lite/testing/op_tests/greater_equal.py | 27 | function | Make a set of tests to do greater_equal. |
827 | 825 | _tflite_convert_verify_num_ops | tensorflow/tensorflow/lite/testing/op_tests/hardswish.py | 29 | function | Verifies that the result of the conversion is a single op. |
828 | 826 | make_hardswish_tests | tensorflow/tensorflow/lite/testing/op_tests/hardswish.py | 47 | function | Make a set of tests to do hardswish. |
829 | 827 | make_identity_tests | tensorflow/tensorflow/lite/testing/op_tests/identity.py | 29 | function | Make a set of tests to do identity. |
830 | 828 | make_l2norm_tests | tensorflow/tensorflow/lite/testing/op_tests/l2norm.py | 28 | function | Make a set of tests to do l2norm. |
831 | 829 | make_l2norm_shared_epsilon_tests | tensorflow/tensorflow/lite/testing/op_tests/l2norm_shared_epsilon.py | 28 | function | Regression test for a bug (b/122651451). |
832 | 830 | make_leaky_relu_tests | tensorflow/tensorflow/lite/testing/op_tests/leaky_relu.py | 28 | function | Make a set of tests to do LeakyRelu. |
833 | 831 | make_less_tests | tensorflow/tensorflow/lite/testing/op_tests/less.py | 27 | function | Make a set of tests to do less. |
834 | 832 | make_less_equal_tests | tensorflow/tensorflow/lite/testing/op_tests/less_equal.py | 27 | function | Make a set of tests to do less_equal. |
835 | 833 | make_local_response_norm_tests | tensorflow/tensorflow/lite/testing/op_tests/local_response_norm.py | 28 | function | Make a set of tests to do local_response_norm. |
836 | 834 | make_log_softmax_tests | tensorflow/tensorflow/lite/testing/op_tests/log_softmax.py | 27 | function | Make a set of tests to do log_softmax. |
837 | 835 | _make_logical_tests | tensorflow/tensorflow/lite/testing/op_tests/logic.py | 26 | function | Make a set of tests to do logical operations. |
838 | 836 | make_logical_or_tests | tensorflow/tensorflow/lite/testing/op_tests/logic.py | 65 | function | Make a set of tests to do logical_or. |
839 | 837 | make_logical_and_tests | tensorflow/tensorflow/lite/testing/op_tests/logic.py | 71 | function | Make a set of tests to do logical_and. |
840 | 838 | make_logical_xor_tests | tensorflow/tensorflow/lite/testing/op_tests/logic.py | 77 | function | Make a set of tests to do logical_xor, test logical_not as well. |
841 | 839 | make_lstm_tests | tensorflow/tensorflow/lite/testing/op_tests/lstm.py | 29 | function | Make a set of tests to do basic Lstm cell. |
842 | 840 | make_matrix_diag_tests | tensorflow/tensorflow/lite/testing/op_tests/matrix_diag.py | 27 | function | Make a set of tests for tf.linalg.diag op. |
843 | 841 | make_matrix_set_diag_tests | tensorflow/tensorflow/lite/testing/op_tests/matrix_set_diag.py | 27 | function | Make a set of tests for tf.linalg.set_diag op. |
844 | 842 | make_maximum_tests | tensorflow/tensorflow/lite/testing/op_tests/maximum.py | 27 | function | Make a set of tests to do maximum. |
845 | 843 | make_minimum_tests | tensorflow/tensorflow/lite/testing/op_tests/minimum.py | 27 | function | Make a set of tests to do minimum. |
846 | 844 | make_mirror_pad_tests | tensorflow/tensorflow/lite/testing/op_tests/mirror_pad.py | 28 | function | Make a set of tests to do mirror_pad. |
847 | 845 | make_nearest_upsample_tests | tensorflow/tensorflow/lite/testing/op_tests/nearest_upsample.py | 27 | function | Make a set of tests to do nearest_upsample. |
848 | 846 | make_neg_tests | tensorflow/tensorflow/lite/testing/op_tests/neg.py | 27 | function | Make a set of tests to do neg. |
849 | 847 | make_not_equal_tests | tensorflow/tensorflow/lite/testing/op_tests/not_equal.py | 27 | function | Make a set of tests to do not_equal. |
850 | 848 | make_one_hot_tests | tensorflow/tensorflow/lite/testing/op_tests/one_hot.py | 27 | function | Make a set of tests to do one_hot. |
851 | 849 | make_pack_tests | tensorflow/tensorflow/lite/testing/op_tests/pack.py | 28 | function | Make a set of tests to do stack. |
852 | 850 | make_pad_tests | tensorflow/tensorflow/lite/testing/op_tests/pad.py | 28 | function | Make a set of tests to do pad. |
853 | 851 | make_padv2_tests | tensorflow/tensorflow/lite/testing/op_tests/padv2.py | 28 | function | Make a set of tests to do padv2. |
854 | 852 | make_placeholder_with_default_tests | tensorflow/tensorflow/lite/testing/op_tests/placeholder_with_default.py | 28 | function | Make a set of tests to test placeholder_with_default. |
855 | 853 | make_pool_tests | tensorflow/tensorflow/lite/testing/op_tests/pool.py | 26 | function | Make a set of tests to do average pooling.
Args:
pool_op_in: TensorFlow pooling operation to test, e.g. `tf.nn.avg_pool2d`.
allow_fully_quantize: bool, whether fully_quantize is allowed.
Returns:
A function representing the true generator (with `pool_op_in` curried). |
856 | 854 | make_l2_pool | tensorflow/tensorflow/lite/testing/op_tests/pool.py | 119 | function | Given an input, perform a sequence of TensorFlow ops to produce l2pool. |
857 | 855 | make_l2_pool_tests | tensorflow/tensorflow/lite/testing/op_tests/pool.py | 131 | function | |
858 | 856 | make_avg_pool_tests | tensorflow/tensorflow/lite/testing/op_tests/pool.py | 136 | function | |
859 | 857 | make_max_pool_tests | tensorflow/tensorflow/lite/testing/op_tests/pool.py | 143 | function | |
860 | 858 | make_prelu_tests | tensorflow/tensorflow/lite/testing/op_tests/prelu.py | 28 | function | Make a set of tests to do PReLU. |
861 | 859 | make_range_tests | tensorflow/tensorflow/lite/testing/op_tests/range.py | 27 | function | Make a set of tests to do range. |
862 | 860 | make_rank_tests | tensorflow/tensorflow/lite/testing/op_tests/rank.py | 27 | function | Make a set of tests to do rank. |
863 | 861 | make_reduce_tests | tensorflow/tensorflow/lite/testing/op_tests/reduce.py | 27 | function | Make a set of tests to do reduce operation.
Args:
reduce_op: TensorFlow reduce operation to test, e.g. `tf.reduce_mean`.
min_value: min value for created tensor data.
max_value: max value for created tensor data.
boolean_tensor_only: If true, will only generate tensor with boolean value.
allow_fully_quantize: bool, whether fully_quantize is allowed.
Returns:
A function representing the true generator with `reduce_op` curried. |
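The "curried generator" pattern that make_pool_tests and make_reduce_tests describe (an outer factory binds the op and returns the real generator) can be sketched generically; this is a simplified illustration with hypothetical names, not the actual TFLite testing code:

```python
def _make_reduce_tests_sketch(reduce_op, min_value=-10, max_value=10):
    """Outer factory: binds the op name and data range, then returns the
    actual test generator (the 'curried' pattern the docstrings
    describe)."""
    def generator(options):
        # A real generator would build graphs and zip examples here;
        # this sketch just reports its bound configuration.
        return (reduce_op, min_value, max_value, options)
    return generator

# Thin wrappers mirror the role of make_mean_tests / make_sum_tests.
make_mean_tests_sketch = _make_reduce_tests_sketch("mean")
make_sum_tests_sketch = _make_reduce_tests_sketch("sum", min_value=0)

print(make_mean_tests_sketch("opts"))  # ('mean', -10, 10, 'opts')
```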
864 | 862 | make_mean_tests | tensorflow/tensorflow/lite/testing/op_tests/reduce.py | 219 | function | Make a set of tests to do mean. |
865 | 863 | make_sum_tests | tensorflow/tensorflow/lite/testing/op_tests/reduce.py | 231 | function | Make a set of tests to do sum. |
866 | 864 | make_reduce_prod_tests | tensorflow/tensorflow/lite/testing/op_tests/reduce.py | 243 | function | Make a set of tests to do prod. |
867 | 865 | make_reduce_max_tests | tensorflow/tensorflow/lite/testing/op_tests/reduce.py | 250 | function | Make a set of tests to do max. |
868 | 866 | make_reduce_min_tests | tensorflow/tensorflow/lite/testing/op_tests/reduce.py | 258 | function | Make a set of tests to do min. |
869 | 867 | make_reduce_any_tests | tensorflow/tensorflow/lite/testing/op_tests/reduce.py | 266 | function | Make a set of tests to do any. |
870 | 868 | make_relu_tests | tensorflow/tensorflow/lite/testing/op_tests/relu.py | 28 | function | Make a set of tests to do relu. |
871 | 869 | make_relu1_tests | tensorflow/tensorflow/lite/testing/op_tests/relu1.py | 28 | function | Make a set of tests to do relu1. |
872 | 870 | make_relu6_tests | tensorflow/tensorflow/lite/testing/op_tests/relu6.py | 28 | function | Make a set of tests to do relu6. |
873 | 871 | make_reshape_tests | tensorflow/tensorflow/lite/testing/op_tests/reshape.py | 28 | function | Make a set of tests to do reshape. |
874 | 872 | make_resize_bilinear_tests | tensorflow/tensorflow/lite/testing/op_tests/resize_bilinear.py | 27 | function | Make a set of tests to do resize_bilinear. |
875 | 873 | make_resize_nearest_neighbor_tests | tensorflow/tensorflow/lite/testing/op_tests/resize_nearest_neighbor.py | 27 | function | Make a set of tests to do resize_nearest_neighbor. |
876 | 874 | make_resolve_constant_strided_slice_tests | tensorflow/tensorflow/lite/testing/op_tests/resolve_constant_strided_slice.py | 29 | function | Make a set of tests to show strided_slice yields incorrect results. |
877 | 875 | make_reverse_sequence_tests | tensorflow/tensorflow/lite/testing/op_tests/reverse_sequence.py | 27 | function | Make a set of tests to do reverse_sequence. |
878 | 876 | make_reverse_v2_tests | tensorflow/tensorflow/lite/testing/op_tests/reverse_v2.py | 27 | function | Make a set of tests to do reverse_v2. |
879 | 877 | make_rfft2d_tests | tensorflow/tensorflow/lite/testing/op_tests/rfft2d.py | 28 | function | Make a set of tests to do rfft2d. |
880 | 878 | make_round_tests | tensorflow/tensorflow/lite/testing/op_tests/round.py | 27 | function | Build the round op testing graph. |
881 | 879 | make_scatter_nd_tests | tensorflow/tensorflow/lite/testing/op_tests/scatter_nd.py | 28 | function | Make a set of tests to do scatter_nd. |
882 | 880 | make_shape_tests | tensorflow/tensorflow/lite/testing/op_tests/shape.py | 28 | function | Make a set of tests to do shape. |
883 | 881 | make_sigmoid_tests | tensorflow/tensorflow/lite/testing/op_tests/sigmoid.py | 27 | function | Make a set of tests to do sigmoid. |
884 | 882 | make_slice_tests | tensorflow/tensorflow/lite/testing/op_tests/slice.py | 29 | function | Make a set of tests to do slice. |
885 | 883 | make_softmax_tests | tensorflow/tensorflow/lite/testing/op_tests/softmax.py | 27 | function | Make a set of tests to do softmax. |
886 | 884 | make_space_to_batch_nd_tests | tensorflow/tensorflow/lite/testing/op_tests/space_to_batch_nd.py | 28 | function | Make a set of tests to do space_to_batch_nd. |
887 | 885 | make_space_to_depth_tests | tensorflow/tensorflow/lite/testing/op_tests/space_to_depth.py | 27 | function | Make a set of tests to do space_to_depth. |
888 | 886 | make_sparse_to_dense_tests | tensorflow/tensorflow/lite/testing/op_tests/sparse_to_dense.py | 29 | function | Make a set of tests to do sparse to dense. |
889 | 887 | make_split_tests | tensorflow/tensorflow/lite/testing/op_tests/split.py | 28 | function | Make a set of tests to do tf.split. |
890 | 888 | make_splitv_tests | tensorflow/tensorflow/lite/testing/op_tests/splitv.py | 28 | function | Make a set of tests to do tf.split_v. |
891 | 889 | make_squeeze_tests | tensorflow/tensorflow/lite/testing/op_tests/squeeze.py | 27 | function | Make a set of tests to do squeeze. |
892 | 890 | make_squeeze_transpose_tests | tensorflow/tensorflow/lite/testing/op_tests/squeeze_transpose.py | 27 | function | Make a set of tests to do squeeze followed by transpose. |
893 | 891 | _make_strided_slice_tests | tensorflow/tensorflow/lite/testing/op_tests/strided_slice.py | 28 | function | Utility function to make strided_slice_tests based on parameters. |
894 | 892 | make_strided_slice_tests | tensorflow/tensorflow/lite/testing/op_tests/strided_slice.py | 100 | function | Make a set of tests to do strided_slice. |
895 | 893 | make_strided_slice_1d_exhaustive_tests | tensorflow/tensorflow/lite/testing/op_tests/strided_slice.py | 208 | function | Make a set of exhaustive tests for 1D strided_slice. |
896 | 894 | make_strided_slice_np_style_tests | tensorflow/tensorflow/lite/testing/op_tests/strided_slice_np_style.py | 29 | function | Make a set of tests to test strided_slice in np style. |
897 | 895 | make_tanh_tests | tensorflow/tensorflow/lite/testing/op_tests/tanh.py | 28 | function | Make a set of tests to do tanh. |
898 | 896 | make_tile_tests | tensorflow/tensorflow/lite/testing/op_tests/tile.py | 27 | function | Make a set of tests to do tile. |
899 | 897 | make_topk_tests | tensorflow/tensorflow/lite/testing/op_tests/topk.py | 28 | function | Make a set of tests to do topk. |
900 | 898 | make_transpose_tests | tensorflow/tensorflow/lite/testing/op_tests/transpose.py | 28 | function | Make a set of tests to do transpose. |
901 | 899 | make_transpose_conv_tests | tensorflow/tensorflow/lite/testing/op_tests/transpose_conv.py | 33 | function | Make a set of tests to do transpose_conv. |
902 | 900 | make_unfused_gru_tests | tensorflow/tensorflow/lite/testing/op_tests/unfused_gru.py | 27 | function | Make a set of tests for unfused gru op. |
903 | 901 | make_unidirectional_sequence_lstm_tests | tensorflow/tensorflow/lite/testing/op_tests/unidirectional_sequence_lstm.py | 29 | function | Make a set of tests to do unidirectional_sequence_lstm. |
904 | 902 | make_unidirectional_sequence_rnn_tests | tensorflow/tensorflow/lite/testing/op_tests/unidirectional_sequence_rnn.py | 29 | function | Make a set of tests to do unidirectional_sequence_rnn. |
905 | 903 | make_unique_tests | tensorflow/tensorflow/lite/testing/op_tests/unique.py | 27 | function | Make a set of tests for Unique op. |
906 | 904 | make_unpack_tests | tensorflow/tensorflow/lite/testing/op_tests/unpack.py | 27 | function | Make a set of tests to do unpack. |
907 | 905 | make_unroll_batch_matmul_tests | tensorflow/tensorflow/lite/testing/op_tests/unroll_batch_matmul.py | 27 | function | Make a set of tests to test unroll_batch_matmul. |
908 | 906 | make_where_tests | tensorflow/tensorflow/lite/testing/op_tests/where.py | 27 | function | Make a set of tests to do where. |
909 | 907 | make_zeros_like_tests | tensorflow/tensorflow/lite/testing/op_tests/zeros_like.py | 27 | function | Make a set of tests to do zeros_like. |
910 | 908 | html_escape | tensorflow/tensorflow/lite/toco/logging/gen_html.py | 37 | function | |
911 | 909 | get_input_type_from_signature | tensorflow/tensorflow/lite/toco/logging/gen_html.py | 41 | function | Parses op_signature and returns a string denoting the input tensor type.
Args:
op_signature: a string specifying the signature of a particular operator.
The signature of an operator contains the input tensor's shape and type,
output tensor's shape and type, operator's name and its version. It has
the following schema:
INPUT:input_1_shape::input_1_type::input_2_shape::input_2_type::..
::OUTPUT:output_1_shape::output_1_type::output_2_shape::output_2_type::
..::NAME:operator_name ::VERSION:operator_version
An example of an operator signature is:
INPUT:[1,73,73,160]::float::[64,1,1,160]::float::[64]::float::
OUTPUT:[1,73,73,64]::float::NAME:Conv::VERSION:1
Returns:
A string denoting the input tensors' type, in the form of shape/type pairs
separated by commas. For example:
shape:[1,73,73,160],type:float,shape:[64,1,1,160],type:float,shape:[64],
type:float |
912 | 910 | get_operator_type | tensorflow/tensorflow/lite/toco/logging/gen_html.py | 78 | function | |
913 | 911 | HTMLGenerator | tensorflow/tensorflow/lite/toco/logging/gen_html.py | 87 | class | Utility class to generate an HTML report. |
914 | 912 | gen_conversion_log_html | tensorflow/tensorflow/lite/toco/logging/gen_html.py | 208 | function | Generates an HTML report about the conversion process.
Args:
conversion_log_dir: A string specifying the directory of the conversion logs.
Before calling this function, `conversion_log_dir` must already contain the
following files: `toco_log_before.pb`, `toco_log_after.pb`,
`toco_tf_graph.dot`, `toco_tflite_graph.dot`.
quantization_enabled: A boolean, passed from the tflite converter to
indicate whether post-training quantization is enabled during conversion.
tflite_graph_path: A string, the filepath to the converted TFLite model.
Raises:
IOError: When any of the required files does not exist. |
915 | 913 | GenHtmlTest | tensorflow/tensorflow/lite/toco/logging/gen_html_test.py | 32 | class | |
916 | 914 | execute | tensorflow/tensorflow/lite/toco/python/toco_from_protos.py | 32 | function | Runs the converter. |
917 | 915 | main | tensorflow/tensorflow/lite/toco/python/toco_from_protos.py | 61 | function | |
918 | 916 | TensorName | tensorflow/tensorflow/lite/toco/python/toco_from_protos_test.py | 30 | function | Get the canonical tensor name (without the ":0" suffix). |
919 | 917 | TocoFromProtosTest | tensorflow/tensorflow/lite/toco/python/toco_from_protos_test.py | 35 | class | |
920 | 918 | get_image | tensorflow/tensorflow/lite/tools/convert_image_to_csv.py | 41 | function | Returns an image loaded into an np.ndarray with dims [height, width, (3 or 1)].
Args:
width: Width to rescale the image to.
height: Height to rescale the image to.
want_grayscale: Whether the result should be converted to grayscale.
filepath: Path of the image file.
Returns:
np.ndarray of shape (height, width, channels) where channels is 1 if
want_grayscale is true, otherwise 3. |
921 | 919 | array_to_int_csv | tensorflow/tensorflow/lite/tools/convert_image_to_csv.py | 65 | function | Converts all elements in a numerical array to a comma-separated string.
Args:
array_data: Numerical array to convert.
Returns:
String containing array values as integers, separated by commas. |
922 | 920 | run_main | tensorflow/tensorflow/lite/tools/convert_image_to_csv.py | 79 | function | Application run loop. |
923 | 921 | main | tensorflow/tensorflow/lite/tools/convert_image_to_csv.py | 110 | function | |
924 | 922 | ConvertImageToCsvTest | tensorflow/tensorflow/lite/tools/convert_image_to_csv_test.py | 34 | class | |
925 | 923 | convert_bytearray_to_object | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 38 | function | Converts a tflite model from a bytearray to an object for parsing. |
926 | 924 | read_model | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 44 | function | Reads a tflite model as a python object.
Args:
input_tflite_file: Full path name to the input tflite file
Raises:
RuntimeError: If input_tflite_file path is invalid.
IOError: If input_tflite_file cannot be opened.
Returns:
A python object corresponding to the input tflite file. |
927 | 925 | read_model_with_mutable_tensors | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 64 | function | Reads a tflite model as a python object with mutable tensors.
Similar to read_model() with the addition that the returned object has
mutable tensors (read_model() returns an object with immutable tensors).
Args:
input_tflite_file: Full path name to the input tflite file
Raises:
RuntimeError: If input_tflite_file path is invalid.
IOError: If input_tflite_file cannot be opened.
Returns:
A mutable python object corresponding to the input tflite file. |
928 | 926 | convert_object_to_bytearray | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 83 | function | Converts a tflite model from an object to a bytearray. |
929 | 927 | write_model | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 93 | function | Writes the tflite model, a python object, into the output file.
Args:
model_object: A tflite model as a python object
output_tflite_file: Full path name to the output tflite file.
Raises:
IOError: If output_tflite_file path is invalid or cannot be opened. |
930 | 928 | strip_strings | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 108 | function | Strips all nonessential strings from the model to reduce model size.
We remove the following strings:
(find strings by searching ":string" in the tensorflow lite flatbuffer schema)
1. Model description
2. SubGraph name
3. Tensor names
We retain OperatorCode custom_code and Metadata name.
Args:
model: The model from which to remove nonessential strings. |
931 | 929 | randomize_weights | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 130 | function | Randomize weights in a model.
Args:
model: The model in which to randomize weights.
random_seed: The input to the random number generator (default value is 0). |
932 | 930 | WriteReadModelTest | tensorflow/tensorflow/lite/tools/flatbuffer_utils_test.py | 29 | class | |
933 | 931 | StripStringsTest | tensorflow/tensorflow/lite/tools/flatbuffer_utils_test.py | 74 | class | |
934 | 932 | RandomizeWeightsTest | tensorflow/tensorflow/lite/tools/flatbuffer_utils_test.py | 119 | class | |
935 | 933 | main | tensorflow/tensorflow/lite/tools/randomize_weights.py | 34 | function | |
936 | 934 | main | tensorflow/tensorflow/lite/tools/strip_strings.py | 34 | function | Application run loop. |
937 | 935 | build_mock_flatbuffer_model | tensorflow/tensorflow/lite/tools/test_utils.py | 30 | function | Creates a flatbuffer containing an example model. |
938 | 936 | load_model_from_flatbuffer | tensorflow/tensorflow/lite/tools/test_utils.py | 211 | function | Loads a model as a python object from a flatbuffer model. |
939 | 937 | build_mock_model | tensorflow/tensorflow/lite/tools/test_utils.py | 218 | function | Creates an object containing an example model. |
940 | 938 | TensorTypeToName | tensorflow/tensorflow/lite/tools/visualize.py | 202 | function | Converts a numerical enum to a readable tensor type. |
941 | 939 | BuiltinCodeToName | tensorflow/tensorflow/lite/tools/visualize.py | 210 | function | Converts a builtin op code enum to a readable name. |
942 | 940 | NameListToString | tensorflow/tensorflow/lite/tools/visualize.py | 218 | function | Converts a list of integers to the equivalent ASCII string. |
943 | 941 | OpCodeMapper | tensorflow/tensorflow/lite/tools/visualize.py | 229 | class | Maps an opcode index to an op name. |
944 | 942 | DataSizeMapper | tensorflow/tensorflow/lite/tools/visualize.py | 245 | class | For buffers, report the number of bytes. |
945 | 943 | TensorMapper | tensorflow/tensorflow/lite/tools/visualize.py | 255 | class | Maps a list of tensor indices to a tooltip-hoverable indicator showing more detail. |
946 | 944 | GenerateGraph | tensorflow/tensorflow/lite/tools/visualize.py | 278 | function | Produces the HTML required to have a d3 visualization of the dag. |
947 | 945 | GenerateTableHtml | tensorflow/tensorflow/lite/tools/visualize.py | 337 | function | Given a list of object values and keys to print, make an HTML table.
Args:
items: Items to print, an array of dicts.
keys_to_print: (key, display_fn). `key` is a key in the object, i.e.
items[0][key] should exist. display_fn is the mapping function for display,
i.e. the displayed HTML cell will contain the string returned by
`display_fn(items[0][key])`.
display_index: add a column which is the index of each row in `items`.
Returns:
An html table. |
948 | 946 | CamelCaseToSnakeCase | tensorflow/tensorflow/lite/tools/visualize.py | 375 | function | Converts an identifier in CamelCase to snake_case. |
949 | 947 | FlatbufferToDict | tensorflow/tensorflow/lite/tools/visualize.py | 381 | function | Converts a hierarchy of FB objects into a nested dict.
We avoid transforming big parts of the flat buffer into python arrays. This
speeds conversion from ten minutes to a few seconds on big graphs.
Args:
fb: a flat buffer structure. (i.e. ModelT)
preserve_as_numpy: true if all downstream np.arrays should be preserved,
false if all downstream np.arrays should become python arrays
Returns:
A dictionary representing the flatbuffer rather than a flatbuffer object. |
950 | 948 | CreateDictFromFlatbuffer | tensorflow/tensorflow/lite/tools/visualize.py | 413 | function | |
951 | 949 | CreateHtmlFile | tensorflow/tensorflow/lite/tools/visualize.py | 419 | function | Given a tflite model in `tflite_input` file, produce html description. |
952 | 950 | main | tensorflow/tensorflow/lite/tools/visualize.py | 506 | function | |
953 | 951 | VisualizeTest | tensorflow/tensorflow/lite/tools/visualize_test.py | 29 | class | |
954 | 952 | main | tensorflow/tensorflow/lite/tools/zip_files.py | 32 | function | |
955 | 953 | _get_ground_truth_detections | tensorflow/tensorflow/lite/tools/evaluation/tasks/coco_object_detection/preprocess_coco_minival.py | 44 | function | Processes the annotations JSON file and returns ground truth data corresponding to allowlisted image IDs.
Args:
instances_file: COCO instances JSON file, usually named as
instances_val20xx.json.
allowlist_file: File containing COCO minival image IDs to allowlist for
evaluation, one per line.
num_images: Number of allowlisted images to pre-process. First num_images
are chosen based on sorted list of filenames. If None, all allowlisted
files are preprocessed.
Returns:
A dict mapping image id (int) to a per-image dict that contains:
'filename', 'width' & 'height' mapped to filename & image dimensions
respectively
AND
'detections' to a list of detection dicts, with each mapping:
'category_id' to COCO category id (starting with 1) &
'bbox' to a list of dimension-normalized [top, left, bottom, right]
bounding-box values. |
956 | 954 | _dump_data | tensorflow/tensorflow/lite/tools/evaluation/tasks/coco_object_detection/preprocess_coco_minival.py | 145 | function | Dumps images & data from ground-truth objects into output_folder_path.
The following are created in output_folder_path:
images/: sub-folder for allowlisted validation images.
ground_truth.pb: A binary proto file containing all ground-truth
object-sets.
Args:
ground_truth_detections: A dict mapping image id to ground truth data.
Output of _get_ground_truth_detections.
images_folder_path: Validation images folder
output_folder_path: folder to output files to. |
957 | 955 | _parse_args | tensorflow/tensorflow/lite/tools/evaluation/tasks/coco_object_detection/preprocess_coco_minival.py | 190 | function | Creates a parser that parses the command line arguments.
Returns:
A namespace parsed from command line arguments. |
958 | 956 | _synset_to_word | tensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/generate_validation_labels.py | 30 | function | Returns the synset-to-word dictionary by reading the synset arrays. |
959 | 957 | _validation_file_path | tensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/generate_validation_labels.py | 50 | function | |
960 | 958 | _synset_array_path | tensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/generate_validation_labels.py | 54 | function | |
961 | 959 | _generate_validation_labels | tensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/generate_validation_labels.py | 58 | function | |
962 | 960 | _check_arguments | tensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/generate_validation_labels.py | 67 | function | |
963 | 961 | main | tensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/generate_validation_labels.py | 80 | function | |
964 | 962 | main | tensorflow/tensorflow/lite/tools/optimize/python/modify_model_interface.py | 38 | function | Application run loop. |
965 | 963 | _parse_type_to_int | tensorflow/tensorflow/lite/tools/optimize/python/modify_model_interface_lib.py | 27 | function | Converts a tflite type to its integer representation.
Args:
dtype: tf.DType representing the inference type.
flag: str representing the flag name.
Returns:
integer, a tflite TensorType enum value.
Raises:
ValueError: Unsupported tflite type. |
966 | 964 | modify_model_interface | tensorflow/tensorflow/lite/tools/optimize/python/modify_model_interface_lib.py | 52 | function | Modify a quantized model's interface (input/output) from float to integer.
Args:
input_file: Full path name to the input tflite file.
output_file: Full path name to the output tflite file.
input_type: Final input interface type.
output_type: Final output interface type.
Raises:
RuntimeError: If the modification of the model interface was unsuccessful.
ValueError: If the input_type or output_type is unsupported. |
967 | 965 | build_tflite_model_with_full_integer_quantization | tensorflow/tensorflow/lite/tools/optimize/python/modify_model_interface_lib_test.py | 31 | function | |
968 | 966 | ModifyModelInterfaceTest | tensorflow/tensorflow/lite/tools/optimize/python/modify_model_interface_lib_test.py | 56 | class | |
969 | 967 | FormatConverterTest | tensorflow/tensorflow/lite/tools/optimize/sparsity/python/format_converter_extension_test.py | 28 | class | |
970 | 968 | get_build_cpus | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 72 | function | |
971 | 969 | make_args | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 80 | function | Construct make command line. |
972 | 970 | make_output | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 94 | function | Invoke make on the target and return output. |
973 | 971 | make | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 99 | function | Invoke make to build tflite C++ sources.
Build dependencies:
apt-get install swig libjpeg-dev zlib1g-dev python3-dev python3-numpy |
974 | 972 | download_dependencies | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 108 | function | Download build dependencies if not already done. |
975 | 973 | CustomBuildExt | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 114 | class | Customized build extension. |
976 | 974 | CustomBuildPy | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 130 | class | |
977 | 975 | get_pybind_include | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 137 | function | The pybind11 include directory is not correctly resolved by default.
This fixes the include directory to /usr/local/pythonX.X
Returns:
include directories for finding pybind11 |
978 | 976 | set_signature_defs | tensorflow/tensorflow/lite/tools/signature/signature_def_utils.py | 25 | function | Sets SignatureDefs to the Metadata of a TfLite flatbuffer buffer.
Args:
tflite_model: Binary TFLite model (bytes or bytes-like object) to which to
add signature_def.
signature_def_map: dict containing SignatureDefs to store in metadata.
Returns:
buffer: A TFLite model binary identical to model buffer with
metadata field containing SignatureDef.
Raises:
ValueError:
tflite_model buffer does not contain a valid TFLite model.
signature_def_map is empty or does not contain a SignatureDef. |
979 | 977 | get_signature_defs | tensorflow/tensorflow/lite/tools/signature/signature_def_utils.py | 51 | function | Get SignatureDef dict from the Metadata of a TfLite flatbuffer buffer.
Args:
tflite_model: TFLite model buffer to get the signature_def.
Returns:
dict mapping serving names to SignatureDefs if they exist; otherwise, an
empty dict.
Raises:
ValueError:
tflite_model buffer does not contain a valid TFLite model.
DecodeError:
SignatureDef cannot be parsed from TfLite SignatureDef metadata. |
980 | 978 | clear_signature_defs | tensorflow/tensorflow/lite/tools/signature/signature_def_utils.py | 78 | function | Clears SignatureDefs from the Metadata of a TfLite flatbuffer buffer.
Args:
tflite_model: TFLite model buffer to remove signature_defs.
Returns:
buffer: A TFLite model binary identical to model buffer with
no SignatureDef metadata.
Raises:
ValueError:
tflite_model buffer does not contain a valid TFLite model. |
981 | 979 | SignatureDefUtilsTest | tensorflow/tensorflow/lite/tools/signature/signature_def_utils_test.py | 30 | class | |
982 | 980 | read32 | tensorflow/tensorflow/lite/tutorials/dataset.py | 35 | function | Read 4 bytes from bytestream as an unsigned 32-bit integer. |
983 | 981 | check_image_file_header | tensorflow/tensorflow/lite/tutorials/dataset.py | 41 | function | Validate that filename corresponds to images for the MNIST dataset. |
984 | 982 | check_labels_file_header | tensorflow/tensorflow/lite/tutorials/dataset.py | 57 | function | Validate that filename corresponds to labels for the MNIST dataset. |
985 | 983 | download | tensorflow/tensorflow/lite/tutorials/dataset.py | 67 | function | Download (and unzip) a file from the MNIST dataset if not already done. |
986 | 984 | dataset | tensorflow/tensorflow/lite/tutorials/dataset.py | 86 | function | Download and parse MNIST dataset. |
987 | 985 | train | tensorflow/tensorflow/lite/tutorials/dataset.py | 114 | function | tf.data.Dataset object for MNIST training data. |
988 | 986 | test | tensorflow/tensorflow/lite/tutorials/dataset.py | 120 | function | tf.data.Dataset object for MNIST test data. |
989 | 987 | test_image_generator | tensorflow/tensorflow/lite/tutorials/mnist_tflite.py | 35 | function | |
990 | 988 | run_eval | tensorflow/tensorflow/lite/tutorials/mnist_tflite.py | 47 | function | Performs evaluation for input image over specified model.
Args:
interpreter: TFLite interpreter initialized with model to execute.
input_image: Image input to the model.
Returns:
output: output tensor of model being executed. |
991 | 989 | main | tensorflow/tensorflow/lite/tutorials/mnist_tflite.py | 72 | function | |
992 | 990 | set_dlopen_flags | tensorflow/tensorflow/python/pywrap_dlopen_global_flags.py | 43 | function | |
993 | 991 | reset_dlopen_flags | tensorflow/tensorflow/python/pywrap_dlopen_global_flags.py | 48 | function | |
994 | 992 | import_graphdef | tensorflow/tensorflow/python/pywrap_mlir.py | 26 | function | |
995 | 993 | experimental_convert_saved_model_to_mlir | tensorflow/tensorflow/python/pywrap_mlir.py | 32 | function | |
996 | 994 | experimental_convert_saved_model_v1_to_mlir | tensorflow/tensorflow/python/pywrap_mlir.py | 39 | function | |
997 | 995 | experimental_run_pass_pipeline | tensorflow/tensorflow/python/pywrap_mlir.py | 48 | function | |
998 | 996 | enable | tensorflow/tensorflow/python/tf2.py | 30 | function | |
999 | 997 | disable | tensorflow/tensorflow/python/tf2.py | 36 | function | |
1000 | 998 | enabled | tensorflow/tensorflow/python/tf2.py | 42 | function | |
1001 | 999 | AssertTransformer | tensorflow/tensorflow/python/autograph/converters/asserts.py | 27 | class | Transforms Assert nodes to Call so they can be handled as functions. |
1002 | 1000 | transform | tensorflow/tensorflow/python/autograph/converters/asserts.py | 50 | function | |
1003 | 1001 | AssertsTest | tensorflow/tensorflow/python/autograph/converters/asserts_test.py | 30 | class | |
1004 | 1002 | _Break | tensorflow/tensorflow/python/autograph/converters/break_statements.py | 29 | class | |
1005 | 1003 | BreakTransformer | tensorflow/tensorflow/python/autograph/converters/break_statements.py | 39 | class | Canonicalizes break statements into additional conditionals. |
1006 | 1004 | transform | tensorflow/tensorflow/python/autograph/converters/break_statements.py | 183 | function | |
1007 | 1005 | BreakCanonicalizationTest | tensorflow/tensorflow/python/autograph/converters/break_statements_test.py | 27 | class | |
1008 | 1006 | _Function | tensorflow/tensorflow/python/autograph/converters/call_trees.py | 40 | class | |
1009 | 1007 | _ArgTemplateBuilder | tensorflow/tensorflow/python/autograph/converters/call_trees.py | 51 | class | Constructs a tuple representing the positional arguments in a call.
Example (yes, it's legal Python 3):
f(*args1, b, *args2, c, d) -> args1 + (b,) + args2 + (c, d) |
1010 | 1008 | CallTreeTransformer | tensorflow/tensorflow/python/autograph/converters/call_trees.py | 96 | class | Transforms the call tree by renaming transformed symbols. |
1011 | 1009 | transform | tensorflow/tensorflow/python/autograph/converters/call_trees.py | 211 | function | Transform function call to the compiled counterparts.
Args:
node: AST
ctx: EntityContext
Returns:
A tuple (node, new_names):
node: The transformed AST
new_names: set(string), containing any newly-generated names |
1012 | 1010 | MockConvertedCall | tensorflow/tensorflow/python/autograph/converters/call_trees_test.py | 30 | class | |
1013 | 1011 | CallTreesTest | tensorflow/tensorflow/python/autograph/converters/call_trees_test.py | 42 | class | |
1014 | 1012 | ConditionalExpressionTransformer | tensorflow/tensorflow/python/autograph/converters/conditional_expressions.py | 28 | class | Converts conditional expressions to functional form. |
1015 | 1013 | transform | tensorflow/tensorflow/python/autograph/converters/conditional_expressions.py | 48 | function | |
1016 | 1014 | ConditionalExpressionsTest | tensorflow/tensorflow/python/autograph/converters/conditional_expressions_test.py | 26 | class | |
1017 | 1015 | _Continue | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 29 | class | |
1018 | 1016 | _Block | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 40 | class | Tracks information about lexical blocks as they are visited in the AST.
Mainly, this object tracks the creation of block guards that replace
`continue` statements (e.g. `if not continue_:`).
Attributes:
create_guard_current: bool, whether to create a guard for the current
statement.
create_guard_next: bool, whether to create a guard for the next
statement.
is_loop_type: bool, whether this block is the body of a loop. |
1019 | 1017 | ContinueCanonicalizationTransformer | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 60 | class | Canonicalizes continue statements into additional conditionals. |
1020 | 1018 | transform | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 163 | function | |
1021 | 1019 | ContinueCanonicalizationTest | tensorflow/tensorflow/python/autograph/converters/continue_statements_test.py | 27 | class | |
1022 | 1020 | _Function | tensorflow/tensorflow/python/autograph/converters/control_flow.py | 41 | class | |
1023 | 1021 | ControlFlowTransformer | tensorflow/tensorflow/python/autograph/converters/control_flow.py | 46 | class | Transforms control flow structures like loops and conditionals. |
1024 | 1022 | AnnotatedDef | tensorflow/tensorflow/python/autograph/converters/control_flow.py | 395 | class | |
1025 | 1023 | transform | tensorflow/tensorflow/python/autograph/converters/control_flow.py | 402 | function | |
1026 | 1024 | ControlFlowTransformer | tensorflow/tensorflow/python/autograph/converters/control_flow_deprecated_py2.py | 44 | class | Transforms control flow structures like loops and conditionals. |
1027 | 1025 | AnnotatedDef | tensorflow/tensorflow/python/autograph/converters/control_flow_deprecated_py2.py | 623 | class | |
1028 | 1026 | transform | tensorflow/tensorflow/python/autograph/converters/control_flow_deprecated_py2.py | 630 | function | |
1029 | 1027 | ControlFlowTestBase | tensorflow/tensorflow/python/autograph/converters/control_flow_test.py | 43 | class | |
1030 | 1028 | NestedControlFlowTest | tensorflow/tensorflow/python/autograph/converters/control_flow_test.py | 59 | class | |
1031 | 1029 | WhileStatementTest | tensorflow/tensorflow/python/autograph/converters/control_flow_test.py | 106 | class | |
1032 | 1030 | IfStatementTest | tensorflow/tensorflow/python/autograph/converters/control_flow_test.py | 352 | class | |
1033 | 1031 | ForStatementTest | tensorflow/tensorflow/python/autograph/converters/control_flow_test.py | 598 | class | |
1034 | 1032 | AdvancedControlFlowTest | tensorflow/tensorflow/python/autograph/converters/control_flow_test.py | 688 | class | |
1035 | 1033 | _LoopScope | tensorflow/tensorflow/python/autograph/converters/directives.py | 48 | class | |
1036 | 1034 | _map_args | tensorflow/tensorflow/python/autograph/converters/directives.py | 55 | function | Maps AST call nodes to the actual function's arguments.
Args:
call_node: ast.Call
function: Callable[..., Any], the actual function matching call_node
Returns:
Dict[Text, ast.AST], mapping each of the function's argument names to
the respective AST node.
Raises:
ValueError: if the default arguments are not correctly set |
1037 | 1035 | DirectivesTransformer | tensorflow/tensorflow/python/autograph/converters/directives.py | 90 | class | Parses compiler directives and converts them into AST annotations. |
1038 | 1036 | transform | tensorflow/tensorflow/python/autograph/converters/directives.py | 180 | function | |
1039 | 1037 | DirectivesTest | tensorflow/tensorflow/python/autograph/converters/directives_test.py | 28 | class | |
1040 | 1038 | _Function | tensorflow/tensorflow/python/autograph/converters/functions.py | 32 | class | |
1041 | 1039 | FunctionTransformer | tensorflow/tensorflow/python/autograph/converters/functions.py | 38 | class | Wraps function bodies around autograph-specific boilerplate. |
1042 | 1040 | transform | tensorflow/tensorflow/python/autograph/converters/functions.py | 134 | function | |
1043 | 1041 | FunctionTransformer | tensorflow/tensorflow/python/autograph/converters/functions_test.py | 31 | class | |
1044 | 1042 | ListCompTransformer | tensorflow/tensorflow/python/autograph/converters/list_comprehensions.py | 42 | class | Lowers list comprehensions into standard control flow. |
1045 | 1043 | transform | tensorflow/tensorflow/python/autograph/converters/list_comprehensions.py | 81 | function | |
1046 | 1044 | ListCompTest | tensorflow/tensorflow/python/autograph/converters/list_comprehensions_test.py | 26 | class | |
1047 | 1045 | _Statement | tensorflow/tensorflow/python/autograph/converters/lists.py | 45 | class | |
1048 | 1046 | ListTransformer | tensorflow/tensorflow/python/autograph/converters/lists.py | 51 | class | Converts lists and related operations to their TF counterpart. |
1049 | 1047 | transform | tensorflow/tensorflow/python/autograph/converters/lists.py | 239 | function | |
1050 | 1048 | ListTest | tensorflow/tensorflow/python/autograph/converters/lists_test.py | 33 | class | |
1051 | 1049 | LogicalExpressionTransformer | tensorflow/tensorflow/python/autograph/converters/logical_expressions.py | 49 | class | Converts logical expressions to corresponding TF calls. |
1052 | 1050 | transform | tensorflow/tensorflow/python/autograph/converters/logical_expressions.py | 135 | function | |
1053 | 1051 | LogicalExpressionTest | tensorflow/tensorflow/python/autograph/converters/logical_expressions_test.py | 28 | class | |
1054 | 1052 | _RewriteBlock | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 37 | class | |
1055 | 1053 | ConditionalReturnRewriter | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 43 | class | Rewrites a pattern where it is not obvious that all paths return a value.
This rewrite allows avoiding intermediate None return values.
The following pattern:
  if cond:
    <block 1>
    return
  else:
    <block 2>
  <block 3>
is converted to:
  if cond:
    <block 1>
    return
  else:
    <block 2>
    <block 3>
and vice-versa (if the else returns, subsequent statements are moved under the
if branch). |
1056 | 1054 | _Block | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 159 | class | |
1057 | 1055 | _Function | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 172 | class | |
1058 | 1056 | ReturnStatementsTransformer | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 183 | class | Lowers return statements into variables and conditionals.
Specifically, the following pattern:
  <block 1>
  return val
  <block 2>
is converted to:
  do_return = False
  retval = None
  <block 1>
  do_return = True
  retval = val
  if not do_return:
    <block 2>
  return retval
The conversion adjusts loops as well:
  <block 1>
  while cond:
    <block 2>
    return retval
is converted to:
  <block 1>
  while not do_return and cond:
    <block 2>
    do_return = True
    retval = val |
1059 | 1057 | transform | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 392 | function | Ensure a function has only a single return, at the end. |
1060 | 1058 | SingleReturnTest | tensorflow/tensorflow/python/autograph/converters/return_statements_test.py | 28 | class | |
1061 | 1059 | SliceTransformer | tensorflow/tensorflow/python/autograph/converters/slices.py | 28 | class | Converts slicing operations to their TF counterpart.
Currently, relying on the default slice operator that Tensor uses is
insufficient, because TensorArray and tensor lists use dedicated index read
and write functions. |
1062 | 1060 | transform | tensorflow/tensorflow/python/autograph/converters/slices.py | 84 | function | |
1063 | 1061 | SliceTest | tensorflow/tensorflow/python/autograph/converters/slices_test.py | 31 | class | |
1064 | 1062 | VariableAccessTransformer | tensorflow/tensorflow/python/autograph/converters/variables.py | 28 | class | Rewrites basic symbol reads.
This transformer rewrites variable reads with a "read" operator which allows
tracking activity.
Example:
For a basic statement:
a = b + c
This is translated to:
a = ld(b) + ld(c)
Augmented assignment operations also introduce a `ld` operator:
a += b
The assignment target also receives an operator to properly represent the
read:
a = ld(a)
a += ld(b) |
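The effect of the read operator can be sketched with a minimal stand-in (the real `ld` lives in AutoGraph's runtime and participates in activity tracking; this hypothetical version only records and forwards values):

```python
reads = []

def ld(v):
    """Stand-in read operator: records each value read, then forwards it."""
    reads.append(v)
    return v

b, c = 2, 3
a = ld(b) + ld(c)   # rewritten form of: a = b + c
a = ld(a)           # augmented assignment first reads the target...
a += ld(b)          # ...then the right-hand side: a += b
```

Because every symbol read is routed through `ld`, the runtime can observe exactly which variables a statement touches.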
1065 | 1063 | transform | tensorflow/tensorflow/python/autograph/converters/variables.py | 100 | function | |
1066 | 1064 | VariablesTest | tensorflow/tensorflow/python/autograph/converters/variables_test.py | 26 | class | |
1067 | 1065 | _control_ctx | tensorflow/tensorflow/python/autograph/core/ag_ctx.py | 29 | function | |
1068 | 1066 | control_status_ctx | tensorflow/tensorflow/python/autograph/core/ag_ctx.py | 35 | function | |
1069 | 1067 | Status | tensorflow/tensorflow/python/autograph/core/ag_ctx.py | 40 | class | |
1070 | 1068 | ControlStatusCtx | tensorflow/tensorflow/python/autograph/core/ag_ctx.py | 46 | class | A context that tracks whether autograph is enabled by the user. |
1071 | 1069 | NullCtx | tensorflow/tensorflow/python/autograph/core/ag_ctx.py | 66 | class | Helper substitute for contextlib.nullcontext. |
1072 | 1070 | _default_control_status_ctx | tensorflow/tensorflow/python/autograph/core/ag_ctx.py | 76 | function | |
1073 | 1071 | Rule | tensorflow/tensorflow/python/autograph/core/config_lib.py | 27 | class | Base class for conversion rules. |
1074 | 1072 | Action | tensorflow/tensorflow/python/autograph/core/config_lib.py | 38 | class | |
1075 | 1073 | DoNotConvert | tensorflow/tensorflow/python/autograph/core/config_lib.py | 44 | class | Indicates that this module should not be converted.
1076 | 1074 | Convert | tensorflow/tensorflow/python/autograph/core/config_lib.py | 56 | class | Indicates that this module should be converted. |
1077 | 1075 | Feature | tensorflow/tensorflow/python/autograph/core/converter.py | 83 | class | This enumeration represents optional conversion options.
These conversion options are experimental. They are subject to change without
notice and offer no guarantees.
_Example Usage_
```python
optionals = tf.autograph.experimental.Feature.EQUALITY_OPERATORS
@tf.function(experimental_autograph_options=optionals)
def f(i):
if i == 0: # EQUALITY_OPERATORS allows the use of == here.
tf.print('i is zero')
```
Attributes:
ALL: Enable all features.
AUTO_CONTROL_DEPS: Insertion of control dependencies in the generated code.
ASSERT_STATEMENTS: Convert Tensor-dependent assert statements to tf.Assert.
BUILTIN_FUNCTIONS: Convert builtin functions applied to Tensors to
their TF counterparts.
EQUALITY_OPERATORS: Whether to convert the comparison operators, like
equality. This is soon to be deprecated as support is being added to the
Tensor class.
LISTS: Convert list idioms, like initializers, slices, append, etc.
NAME_SCOPES: Insert name scopes that name ops according to context, like the
function they were defined in. |
1078 | 1076 | ConversionOptions | tensorflow/tensorflow/python/autograph/core/converter.py | 138 | class | Immutable container for global conversion flags.
Attributes:
recursive: bool, whether to recursively convert any user functions or
classes that the converted function may use.
user_requested: bool, whether the conversion was explicitly requested by
the user, as opposed to being performed as a result of other logic. This
value always auto-resets to False in child conversions.
optional_features: Union[Feature, Set[Feature]], controls the use of
optional features in the conversion process. See Feature for available
options. |
1079 | 1077 | ProgramContext | tensorflow/tensorflow/python/autograph/core/converter.py | 236 | class | ProgramContext keeps track of converting function hierarchies.
Attributes:
options: ConversionOptions
autograph_module: Deprecated. Do not use. |
1080 | 1078 | Base | tensorflow/tensorflow/python/autograph/core/converter.py | 249 | class | All converters should inherit from this class.
Attributes:
ctx: EntityContext |
1081 | 1079 | TestConverter | tensorflow/tensorflow/python/autograph/core/converter_test.py | 32 | class | |
1082 | 1080 | ConversionOptionsTest | tensorflow/tensorflow/python/autograph/core/converter_test.py | 36 | class | |
1083 | 1081 | ConverterBaseTest | tensorflow/tensorflow/python/autograph/core/converter_test.py | 64 | class | |
1084 | 1082 | allowlist | tensorflow/tensorflow/python/autograph/core/converter_testing.py | 35 | function | Helper that marks a callable as allowlisted.
1085 | 1083 | is_inside_generated_code | tensorflow/tensorflow/python/autograph/core/converter_testing.py | 47 | function | Tests whether the caller is generated code. Implementation-specific. |
1086 | 1084 | TestingTranspiler | tensorflow/tensorflow/python/autograph/core/converter_testing.py | 66 | class | Testing version that only applies given transformations. |
1087 | 1085 | TestCase | tensorflow/tensorflow/python/autograph/core/converter_testing.py | 98 | class | Base class for unit tests in this module. Contains relevant utilities. |
1088 | 1086 | FunctionScope | tensorflow/tensorflow/python/autograph/core/function_wrappers.py | 33 | class | Context manager that wraps the body of a converted function.
This context manager handles various operations related to the scope of a
function:
* optional TF name scopes - these name scopes match the name of the
function, for easy visualization in TensorBoard;
* optional automatic control dependencies - this adds the same mechanism
for control dependencies that is used by `@tf.function`; it can be
optionally enabled when using `tf.autograph.to_graph`;
* tracking of autograph conversion state (whether it's enabled by the user,
conversion options). |
1089 | 1087 | with_function_scope | tensorflow/tensorflow/python/autograph/core/function_wrappers.py | 114 | function | Inline version of the FunctionScope context manager. |
1090 | 1088 | FunctionWrappersTest | tensorflow/tensorflow/python/autograph/core/function_wrappers_test.py | 29 | class | |
1091 | 1089 | UnsupportedFeaturesChecker | tensorflow/tensorflow/python/autograph/core/unsupported_features_checker.py | 26 | class | Quick check for Python features we know we don't support.
Any features detected will cause AutoGraph to not compile a function. |
1092 | 1090 | verify | tensorflow/tensorflow/python/autograph/core/unsupported_features_checker.py | 60 | function | |
1093 | 1091 | is_autograph_strict_conversion_mode | tensorflow/tensorflow/python/autograph/impl/api.py | 72 | function | |
1094 | 1092 | AutoGraphError | tensorflow/tensorflow/python/autograph/impl/api.py | 82 | class | Base class for all AutoGraph exceptions. |
1095 | 1093 | ConversionError | tensorflow/tensorflow/python/autograph/impl/api.py | 87 | class | Raised during the conversion process. |
1096 | 1094 | StagingError | tensorflow/tensorflow/python/autograph/impl/api.py | 92 | class | Raised during the staging (i.e. Python execution) of converted code. |
1097 | 1095 | _ErrorMetadata | tensorflow/tensorflow/python/autograph/impl/api.py | 97 | class | AutoGraph-specific error metadata. See base class. |
1098 | 1096 | _attach_error_metadata | tensorflow/tensorflow/python/autograph/impl/api.py | 146 | function | Augments an error with the metadata necessary for rewrite. |
1099 | 1097 | StackTraceMapper | tensorflow/tensorflow/python/autograph/impl/api.py | 166 | class | Remaps generated code to code it originated from. |
1100 | 1098 | PyToTF | tensorflow/tensorflow/python/autograph/impl/api.py | 203 | class | The TensorFlow AutoGraph transformer. |
1101 | 1099 | _convert_actual | tensorflow/tensorflow/python/autograph/impl/api.py | 275 | function | Applies AutoGraph to entity. |
1102 | 1100 | autograph_artifact | tensorflow/tensorflow/python/autograph/impl/api.py | 298 | function | |
1103 | 1101 | is_autograph_artifact | tensorflow/tensorflow/python/autograph/impl/api.py | 303 | function | |
1104 | 1102 | converted_call | tensorflow/tensorflow/python/autograph/impl/api.py | 307 | function | Converts a function call inline.
For internal use only.
Note: The argument list is optimized for readability of generated code, which
may look like this:
ag__.converted_call(f, (arg1, arg2), None, fscope)
ag__.converted_call(f, (), dict(arg1=val1, **kwargs), fscope)
ag__.converted_call(f, (arg1, arg2) + varargs, dict(**kwargs), lscope)
Args:
f: The function to convert.
args: Tuple, the original positional arguments of f
kwargs: Optional[Dict], the original keyword arguments of f
caller_fn_scope: Optional[function_wrappers.FunctionScope], the function
scope of the converted function in which this call was originally made.
options: Optional[converter.ConversionOptions], conversion options. If not
specified, the value of caller_fn_scope.callopts is used. Either options
or caller_fn_scope must be present.
Returns:
Any, the result of executing a possibly-converted `f` with the given
arguments. |
1105 | 1103 | _call_unconverted | tensorflow/tensorflow/python/autograph/impl/api.py | 466 | function | Calls the original function without converting with AutoGraph. |
1106 | 1104 | _fall_back_unconverted | tensorflow/tensorflow/python/autograph/impl/api.py | 479 | function | Falls back to calling the function unconverted, in case of error. |
1107 | 1105 | tf_convert | tensorflow/tensorflow/python/autograph/impl/api.py | 506 | function | Decorator that applies AutoGraph to a function.
Use in internal APIs.
This API is suitable for high order functions internal to the TensorFlow API,
and more generally any function to which Autograph is not applied.
Guidance: convert was a decorator meant for use directly by developers, and
will soon be deprecated in favor of tf.function. tf_convert is to be called
from high order functions internal to TF.
Args:
f: Callable.
ctx: ag_ctx.ControlStatusCtx, the Autograph context in which `f` is used.
convert_by_default: bool, whether to use AutoGraph when the context doesn't
specify.
user_requested: bool, whether to ignore the conversion allowlist. See
ConversionOptions.user_requested.
Returns:
Either `f` or the converted version of `f`. |
1108 | 1106 | call_with_unspecified_conversion_status | tensorflow/tensorflow/python/autograph/impl/api.py | 565 | function | Decorator that resets the conversion context to the unspecified status. |
1109 | 1107 | _log_callargs | tensorflow/tensorflow/python/autograph/impl/api.py | 577 | function | Logging helper. |
1110 | 1108 | do_not_convert | tensorflow/tensorflow/python/autograph/impl/api.py | 599 | function | Decorator that suppresses the conversion of a function.
Args:
func: function to decorate.
Returns:
If `func` is not None, returns a `Callable` which is equivalent to
`func`, but is not converted by AutoGraph.
If `func` is None, returns a decorator that, when invoked with a
single `func` argument, returns a `Callable` equivalent to the
above case. |
1111 | 1109 | convert | tensorflow/tensorflow/python/autograph/impl/api.py | 626 | function | Decorator that compiles a function to use TensorFlow ops.
The decorator is dynamic - it recompiles the target whenever the decorated
function is called. This means the parameter values are known at conversion.
It also means that repeated calls with different types of parameters will be
correctly processed.
Args:
recursive: bool, whether to recursively convert any functions or classes
that the converted function may use.
optional_features: converter.Feature, allows toggling optional or
experimental features. When set to None, only the core features are
enabled.
user_requested: bool, whether this is a function that the user explicitly
asked to be converted. See ConversionOptions.user_requested.
conversion_ctx: Optional ag_ctx.ControlStatusCtx, the Autograph context in
which `f` is used.
Returns:
Callable, a decorator that converts the given function into an equivalent
function that uses TensorFlow ops. |
1112 | 1110 | to_graph | tensorflow/tensorflow/python/autograph/impl/api.py | 682 | function | Converts a Python entity into a TensorFlow graph.
Also see: `tf.autograph.to_code`, `tf.function`.
Unlike `tf.function`, `to_graph` is a low-level transpiler that converts
Python code to TensorFlow graph code. It does not implement any caching,
variable management or create any actual ops, and is best used where greater
control over the generated TensorFlow graph is desired. Another difference
from `tf.function` is that `to_graph` will not wrap the graph into a
TensorFlow function or a Python callable. Internally, `tf.function` uses
`to_graph`.
Example usage:
>>> def f(x):
... if x > 0:
... y = x * x
... else:
... y = -x
... return y
...
>>> converted_f = to_graph(f)
>>> x = tf.constant(2)
>>> converted_f(x) # converted_f behaves like a TensorFlow Op.
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Supported Python entities include:
* functions
* classes
* object methods
Functions are converted into new functions with converted code.
Classes are converted by generating a new class whose methods use converted
code.
Methods are converted into unbound functions that have an additional first
argument called `self`.
For a tutorial, see the
[tf.function and AutoGraph guide](https://www.tensorflow.org/guide/function).
For more detailed information, see the
[AutoGraph reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/index.md).
Args:
entity: Python callable or class to convert.
recursive: Whether to recursively convert any functions that the converted
function may call.
experimental_optional_features: `None`, a tuple of, or a single
`tf.autograph.experimental.Feature` value.
Returns:
Same as `entity`, the converted Python function or class.
Raises:
ValueError: If the entity could not be converted. |
1113 | 1111 | to_graph_v1 | tensorflow/tensorflow/python/autograph/impl/api.py | 754 | function | Converts a Python entity into a TensorFlow graph.
Also see: `tf.autograph.to_code`, `tf.function`.
Unlike `tf.function`, `to_graph` is a low-level transpiler that converts
Python code to TensorFlow graph code. It does not implement any caching,
variable management or create any actual ops, and is best used where greater
control over the generated TensorFlow graph is desired. Another difference
from `tf.function` is that `to_graph` will not wrap the graph into a
TensorFlow function or a Python callable. Internally, `tf.function` uses
`to_graph`.
_Example Usage_
```python
def foo(x):
if x > 0:
y = x * x
else:
y = -x
return y
converted_foo = to_graph(foo)
x = tf.constant(1)
y = converted_foo(x) # converted_foo is a TensorFlow Op-like.
assert is_tensor(y)
```
Supported Python entities include:
* functions
* classes
* object methods
Functions are converted into new functions with converted code.
Classes are converted by generating a new class whose methods use converted
code.
Methods are converted into unbound functions that have an additional first
argument called `self`.
Args:
entity: Python callable or class to convert.
recursive: Whether to recursively convert any functions that the converted
function may call.
arg_values: Deprecated.
arg_types: Deprecated.
experimental_optional_features: `None`, a tuple of, or a single
`tf.autograph.experimental.Feature` value.
Returns:
Same as `entity`, the converted Python function or class.
Raises:
ValueError: If the entity could not be converted. |
1114 | 1112 | to_code_v1 | tensorflow/tensorflow/python/autograph/impl/api.py | 825 | function | Returns the source code generated by AutoGraph, as a string.
Example usage:
>>> def f(x):
... if x < 0:
... x = -x
... return x
>>> tf.autograph.to_code(f)
"...def tf__f(x):..."
Also see: `tf.autograph.to_graph`.
Note: If a function has been decorated with `tf.function`, pass its
underlying Python function, rather than the callable that `tf.function`
creates:
>>> @tf.function
... def f(x):
... if x < 0:
... x = -x
... return x
>>> tf.autograph.to_code(f.python_function)
"...def tf__f(x):..."
Args:
entity: Python callable or class.
recursive: Whether to recursively convert any functions that the converted
function may call.
arg_values: Deprecated.
arg_types: Deprecated.
indentation: Deprecated.
experimental_optional_features: `None`, a tuple of, or a single
`tf.autograph.experimental.Feature` value.
Returns:
The converted code as string. |
1115 | 1113 | to_code | tensorflow/tensorflow/python/autograph/impl/api.py | 879 | function | Returns the source code generated by AutoGraph, as a string.
Example usage:
>>> def f(x):
... if x < 0:
... x = -x
... return x
>>> tf.autograph.to_code(f)
"...def tf__f(x):..."
Also see: `tf.autograph.to_graph`.
Note: If a function has been decorated with `tf.function`, pass its
underlying Python function, rather than the callable that `tf.function`
creates:
>>> @tf.function
... def f(x):
... if x < 0:
... x = -x
... return x
>>> tf.autograph.to_code(f.python_function)
"...def tf__f(x):..."
Args:
entity: Python callable or class to convert.
recursive: Whether to recursively convert any functions that the converted
function may call.
experimental_optional_features: `None`, a tuple of, or a single
`tf.autograph.experimental.Feature` value.
Returns:
The converted code as string. |
1116 | 1114 | TestResource | tensorflow/tensorflow/python/autograph/impl/api_test.py | 64 | class | |
1117 | 1115 | ApiTest | tensorflow/tensorflow/python/autograph/impl/api_test.py | 70 | class | |
1118 | 1116 | _is_of_known_loaded_module | tensorflow/tensorflow/python/autograph/impl/conversion.py | 37 | function | |
1119 | 1117 | _is_known_loaded_type | tensorflow/tensorflow/python/autograph/impl/conversion.py | 46 | function | Tests whether the function or method is an instance of a known type. |
1120 | 1118 | is_unsupported | tensorflow/tensorflow/python/autograph/impl/conversion.py | 73 | function | Checks whether an entity is supported by AutoGraph at all. |
1121 | 1119 | is_allowlisted | tensorflow/tensorflow/python/autograph/impl/conversion.py | 116 | function | Checks whether an entity is allowed for use in graph mode.
Examples of allowed entities include all members of the tensorflow
package.
Args:
o: A Python entity.
check_call_override: Reserved for internal use. When set to `False`, it
disables the rule according to which classes are allowed if their
__call__ method is allowed.
allow_namedtuple_subclass: Reserved for internal use. When `True`,
namedtuple subclasses are also allowed.
Returns:
Boolean |
1122 | 1120 | is_in_allowlist_cache | tensorflow/tensorflow/python/autograph/impl/conversion.py | 221 | function | |
1123 | 1121 | cache_allowlisted | tensorflow/tensorflow/python/autograph/impl/conversion.py | 229 | function | |
1124 | 1122 | ConversionTest | tensorflow/tensorflow/python/autograph/impl/conversion_test.py | 39 | class | |
1125 | 1123 | set_element_type | tensorflow/tensorflow/python/autograph/lang/directives.py | 33 | function | Indicates that the entity is expected to hold items of the specified type/shape.
The staged TensorFlow ops will reflect and assert this data type. Ignored
otherwise.
Args:
entity: The entity to annotate.
dtype: TensorFlow dtype value to assert for entity.
shape: Optional shape to assert for entity. |
1126 | 1124 | set_loop_options | tensorflow/tensorflow/python/autograph/lang/directives.py | 50 | function | Specifies additional arguments to be passed to the enclosing while_loop.
The parameters apply only to the immediately enclosing loop, and only take
effect if that loop is staged as a TF while_loop; otherwise they are ignored.
Usage:
>>> @tf.function(autograph=True)
... def f():
... n = 0
... for i in tf.range(10):
... tf.autograph.experimental.set_loop_options(maximum_iterations=3)
... n += 1
... return n
>>> @tf.function(autograph=True)
... def f():
... v = tf.constant((0,))
... for i in tf.range(3):
... tf.autograph.experimental.set_loop_options(
... shape_invariants=[(v, tf.TensorShape([None]))]
... )
... v = tf.concat((v, [i]), 0)
... return v
Also see tf.while_loop.
Args:
parallel_iterations: The maximum number of iterations allowed to run in
parallel at any given time. Note that this does not guarantee parallel
execution.
swap_memory: Whether to store intermediate values needed for
gradients on the CPU instead of GPU.
maximum_iterations: Allows limiting the total number of iterations executed
by the loop.
shape_invariants: Allows controlling the argument with the same name passed
to tf.while_loop. Unlike tf.while_loop, this is a list of
`(tensor, shape)` pairs. |
1127 | 1125 | _validate_list_constructor | tensorflow/tensorflow/python/autograph/lang/special_functions.py | 31 | function | Validates the inputs of tensor_list. |
1128 | 1126 | match_staging_level | tensorflow/tensorflow/python/autograph/lang/special_functions.py | 50 | function | Casts a value to be staged at the same level as another. |
1129 | 1127 | tensor_list | tensorflow/tensorflow/python/autograph/lang/special_functions.py | 57 | function | Creates a tensor list and populates it with the given elements.
This function provides a more uniform access to tensor lists and tensor
arrays, and allows optional initialization.
Note: this function is a simplified wrapper. If you need greater control,
it is recommended to use the underlying implementation directly.
Args:
elements: Iterable[tf.Tensor, ...], the elements to initially fill the list
with
element_dtype: Optional[tf.DType], data type for the elements in the list;
required if the list is empty
element_shape: Optional[tf.TensorShape], shape for the elements in the list;
required if the list is empty
use_tensor_array: bool, whether to use the more compatible but restrictive
tf.TensorArray implementation
Returns:
Union[tf.Tensor, tf.TensorArray], the new list.
Raises:
ValueError: for invalid arguments |
1130 | 1128 | stack | tensorflow/tensorflow/python/autograph/lang/special_functions.py | 92 | function | Stacks the input, if it admits the notion of stacking.
For example, a list of tensors can be stacked into a larger tensor. This
function is similar to tf.stack, but it accepts non-lists and lists of
non-tensors as arguments. In the latter case, the function does nothing.
Args:
list_or_tensor: Any
element_dtype: tf.DType, optional dtype for the elements in the list.
Required if the input is stackable, and the list is untyped.
strict: bool, if True an error is raised if the input is not stackable.
Otherwise the function is a no-op.
Returns:
Any, if the input is stackable, the result will be a tf.Tensor. Otherwise,
if strict=False, the result will be list_or_tensor.
Raises:
ValueError: if strict=True and the input is not stackable. |
1131 | 1129 | SpecialFunctionsTest | tensorflow/tensorflow/python/autograph/lang/special_functions_test.py | 31 | class | |
1132 | 1130 | if_exp | tensorflow/tensorflow/python/autograph/operators/conditional_expressions.py | 27 | function | |
1133 | 1131 | _tf_if_exp | tensorflow/tensorflow/python/autograph/operators/conditional_expressions.py | 34 | function | Overload of if_exp that stages a TF cond. |
1134 | 1132 | _py_if_exp | tensorflow/tensorflow/python/autograph/operators/conditional_expressions.py | 55 | function | |
1135 | 1133 | _basic_expr | tensorflow/tensorflow/python/autograph/operators/conditional_expressions_test.py | 29 | function | |
1136 | 1134 | IfExpTest | tensorflow/tensorflow/python/autograph/operators/conditional_expressions_test.py | 38 | class | |
1137 | 1135 | _verify_loop_init_vars | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 102 | function | Ensures that all values in the state are defined when entering a loop. |
1138 | 1136 | _is_subshape | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 117 | function | Returns True if left shape is at least as specific as right shape. |
1139 | 1137 | _verify_single_loop_var | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 134 | function | Verifies whether the initial, entry and exit values are consistent. |
1140 | 1138 | _verify_tf_loop_vars | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 191 | function | Verifies loop variables for consistency. |
1141 | 1139 | verify_single_cond_var | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 233 | function | Verifies whether body_var and orelse_var are consistent. |
1142 | 1140 | _verify_tf_cond_branch_vars | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 263 | function | Verifies variables output by a conditional branch for consistency. |
1143 | 1141 | _verify_tf_cond_vars | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 276 | function | Verifies variables manipulated by a conditional for consistency. |
1144 | 1142 | for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 291 | function | Functional form of a for statement.
The loop operates on a state, which includes all symbols that are
variant across loop iterations, excluding the variables local to the loop.
For example, given the loop below that calculates the geometric and
arithmetic means of some numbers:
```
geo_mean = 1
arith_mean = 0
for i in range(n):
a = numbers[i]
geo_mean *= a
arith_mean += a
```
The state is represented by the variables geo_mean and arith_mean. The
`extra_test`, `body`, `get_state` and `set_state` functions must bind to the
original `geo_mean` and `arith_mean` symbols, using `nonlocal`.
The inputs and outputs of the callables representing the loop blocks are not
explicit - instead, these functions must use nonlocal/global for side effects.
The inputs and outputs are instead controlled by the set_state/get_state
functions.
Args:
iter_: The entity being iterated over.
extra_test: Callable with boolean return type.
An additional loop condition.
body: Callable representing the actual loop body.
get_state: Additional callable which can capture additional state (such as
the values of composite symbols). This is only useful when staging the
loop.
set_state: Additional callable which saves values captured by get_state back
into the Python environment. This is only useful when staging the loop.
symbol_names: Tuple containing names of the loop variables returned by
get_state.
opts: Optional dict of extra loop parameters.
Returns:
Tuple containing the final state. |
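The functional-form contract described above can be sketched in plain Python (a simplified, hypothetical sketch covering only the Python execution path; the real for_stmt also dispatches to TF loop constructs for tensor iterables):

```python
def py_for_stmt(iter_, extra_test, body):
    """Python-only sketch of the for_stmt contract (no TF dispatch)."""
    for target in iter_:
        if extra_test is not None and not extra_test():
            break
        body(target)

def means(numbers):
    geo_mean = 1
    arith_sum = 0

    def body(a):
        nonlocal geo_mean, arith_sum   # loop state is bound via nonlocal
        geo_mean *= a
        arith_sum += a

    py_for_stmt(numbers, None, body)
    return geo_mean, arith_sum / len(numbers)
```

Note how the loop body communicates with its environment only through `nonlocal`, exactly as the docstring requires; there are no explicit inputs or outputs.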
1145 | 1143 | _py_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 371 | function | Overload of for_stmt that executes a Python for loop. |
1146 | 1144 | _known_len_tf_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 400 | function | Overload of for_stmt that iterates over TF entities that admit a length. |
1147 | 1145 | _tf_ragged_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 447 | function | Overload of for_stmt that iterates over TF ragged tensors. |
1148 | 1146 | _tf_range_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 494 | function | Overload of for_stmt that iterates over a TF range (and elides it). |
1149 | 1147 | _tf_iterator_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 555 | function | Overload of for_stmt that iterates over TF Iterators. See for_loop. |
1150 | 1148 | _general_purpose_scan | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 615 | function | Variant of Dataset.scan with semantics of general-purpose computation. |
1151 | 1149 | _tf_dataset_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 629 | function | Overload of _dataset_for_stmt with early stopping. See for_stmt. |
1152 | 1150 | _tf_distributed_iterable_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 700 | function | Overload of for_stmt that iterates over TF distributed datasets. |
1153 | 1151 | while_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 727 | function | Functional form of a while statement.
The loop operates on a so-called state, which includes all symbols that are
variant across loop iterations. In what follows we refer to state as either
a tuple of entities that represent an actual state, or a list of arguments
of the corresponding types.
The inputs and outputs of the callables representing the loop blocks are not
explicit - instead, these functions must use nonlocal/global for side effects.
The inputs and outputs are instead controlled by the set_state/get_state
functions.
Args:
test: Callable with boolean return type. The loop condition.
body: Callable representing the actual loop body.
get_state: Additional callable which can capture additional state (such as
the values of composite symbols). This is only useful when staging the
loop.
set_state: Additional callable which saves values captured by get_state back
into the Python environment. This is only useful when staging the loop.
symbol_names: Tuple containing the names of all loop variables.
opts: Optional dict of extra loop parameters.
Returns:
Tuple containing the final state. |
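As with for_stmt, the while_stmt contract can be sketched in plain Python (a simplified, hypothetical sketch of the Python execution path only; the staged TF path uses tf.while_loop instead):

```python
def py_while_stmt(test, body):
    """Python-only sketch of the while_stmt contract (no TF dispatch)."""
    while test():
        body()

def sum_to(n):
    total = 0

    def test():
        return n > 0          # the loop condition reads the state

    def body():
        nonlocal n, total     # loop state is bound via nonlocal
        total += n
        n -= 1

    py_while_stmt(test, body)
    return total
```

Both `test` and `body` close over the loop variables, so all inputs and outputs flow through side effects rather than argument passing.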
1154 | 1152 | _PythonLoopChecker | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 777 | class | Verifies Python loops for TF-specific limits. |
1155 | 1153 | _py_while_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 851 | function | Overload of while_stmt that executes a Python while loop. |
1156 | 1154 | _shape_invariants_mapping_to_positional_list | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 872 | function | |
1157 | 1155 | _tf_while_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 882 | function | Overload of while_stmt that stages a TF while_stmt. |
1158 | 1156 | if_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 915 | function | Functional form of an if statement.
The conditional operates on a state, which includes all symbols whose values
are a function of the branch taken.
For example, given the code below that calculates the abs function:
```
x = 1
if x < 0:
x = -x
```
The state is represented by the variable `x`. The `body`, `orelse` and
`set_state` functions must bind to the original `x` symbol, using `nonlocal`.
The inputs and outputs of the callables representing the loop blocks are not
explicit - instead, these functions must use nonlocal/global for side effects.
The inputs and outputs are instead controlled by the set_state/get_state
functions.
Args:
cond: Boolean.
body: Callable representing the main block of the conditional.
orelse: Callable representing the else block of the conditional.
get_state: Function that returns a tuple containing the values of all
composite symbols modified within the conditional. This allows access to
state that branches may mutate through side effects. This function is not
needed and should not be called when dispatching to code matching Python's
default semantics. This is useful for checkpointing to avoid unintended
side-effects when staging requires evaluating all code-paths.
set_state: Function to set the values of all composite symbols modified
within the conditional. This is the complement to get_state, used to
restore checkpointed values. The single argument is a tuple containing values
for each composite symbol that may be modified in a branch of the
conditional. This is usually the result of a call to get_state.
symbol_names: Tuple containing basic loop var names.
nouts: Number of variables output by the statement. Vars which are
not outputs will not be passed through staged control flow such as
tf.cond. This includes variables that are defined before the conditional,
but are not used after it. |
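The functional conditional can likewise be sketched in plain Python (a simplified, hypothetical sketch of the Python path only; get_state/set_state matter only when staging tf.cond, so they are omitted here):

```python
def py_if_stmt(cond, body, orelse):
    """Python-only sketch of the if_stmt contract (no TF dispatch)."""
    if cond:
        body()
    else:
        orelse()

def absolute(x):
    def body():
        nonlocal x   # the branch mutates state via nonlocal
        x = -x

    def orelse():
        pass         # the original code has no else block

    py_if_stmt(x < 0, body, orelse)
    return x
```

As with the loop operators, the branches exchange values with the enclosing scope purely through side effects on the bound symbols.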
1159 | 1157 | _tf_if_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 965 | function | Overload of if_stmt that stages a TF cond. |
1160 | 1158 | _py_if_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 1011 | function | Overload of if_stmt that executes a Python if statement. |
1161 | 1159 | _disallow_undefs_into_loop | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 104 | function | Ensures that all values in the state are defined when entering a loop. |
1162 | 1160 | _is_subshape | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 120 | function | Returns True if left shape is at least as specific as right shape. |
1163 | 1161 | _verify_single_loop_var | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 137 | function | Verifies whether the initial, entry and exit values are consistent. |
1164 | 1162 | _verify_tf_loop_vars | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 191 | function | Verifies loop variables for consistency. |
1165 | 1163 | _verify_single_cond_var | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 223 | function | Verifies whether body_var and orelse_var are consistent. |
1166 | 1164 | _verify_tf_cond_vars | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 248 | function | Verifies variables manipulated by a conditional for consistency. |
1167 | 1165 | for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 279 | function | Functional form of a for statement.
The loop operates on a state, which includes all symbols that are
variant across loop iterations, excluding the iterate as well as the
variables local to the loop.
For example, given the loop below that calculates the geometric and
arithmetic means of some numbers:
geo_mean = 1
arith_mean = 0
for i in range(n):
a = numbers[i]
geo_mean *= a
arith_mean += a
The state is represented by the variables geo_mean and arith_mean. The
argument for initial_state may contain the tuple (1, 0), the body will
take the arguments geo_mean and arith_mean and will return a tuple
representing the new values for geo_mean and arith_mean, respectively.
Args:
iter_: The entity being iterated over.
extra_test: Callable with the state as arguments, and boolean return type.
An additional loop condition.
body: Callable with the iterate and the state as arguments, and state as
return type. The actual loop body.
get_state: Additional callable which can capture additional state (such as
the values of composite symbols). This is only useful when staging the
loop.
  set_state: Additional callable which saves values captured by get_state back
    into the Python environment. This is only useful when staging the loop.
init_vars: Tuple containing the initial state.
basic_symbol_names: Tuple containing basic loop var names.
composite_symbol_names: Tuple containing composite loop var names.
opts: Optional dict of extra loop parameters.
Returns:
Tuple containing the final state. |
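The for_stmt contract above (state in, state out) can be sketched in plain Python; this is a simplified stand-in for the Python overload only, and ignores the staging-related get_state/set_state and opts parameters:

```python
def for_stmt(iter_, extra_test, body, get_state, set_state,
             init_vars, basic_symbol_names, composite_symbol_names, opts):
    """Simplified Python overload: a plain for loop threading a state tuple."""
    state = init_vars
    for target in iter_:
        if extra_test is not None and not extra_test(*state):
            break
        state = body(target, *state)
    return state


# The geometric/arithmetic means example from the docstring, in functional form.
numbers = [2.0, 8.0]
geo_mean, arith_mean = for_stmt(
    range(len(numbers)),
    None,
    lambda i, g, m: (g * numbers[i], m + numbers[i]),
    None, None,
    (1.0, 0.0),
    ('geo_mean', 'arith_mean'), (), None)
```

After the loop, `geo_mean` holds the running product and `arith_mean` the running sum, exactly the state tuple the docstring describes.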
1168 | 1166 | _py_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 364 | function | Overload of for_stmt that executes a Python for loop. |
1169 | 1167 | _known_len_tf_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 383 | function | Overload of for_stmt that iterates over TF entities that admit a length. |
1170 | 1168 | _tf_ragged_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 446 | function | Overload of for_stmt that iterates over TF ragged tensors. |
1171 | 1169 | _tf_range_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 509 | function | Overload of for_stmt that iterates over a TF range (and elides it). |
1172 | 1170 | _tf_iterator_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 572 | function | Overload of for_stmt that iterates over TF Iterators. See for_loop. |
1173 | 1171 | _tf_dataset_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 636 | function | Overload of for_stmt that iterates over TF Datasets. |
1174 | 1172 | _general_purpose_scan | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 653 | function | Variant of Dataset.scan with semantics of general-purpose computation. |
1175 | 1173 | _dataset_for_stmt_with_extra_test | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 667 | function | Overload of _dataset_for_stmt with early stopping. See for_stmt. |
1176 | 1174 | _dataset_for_stmt_no_extra_test | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 725 | function | Overload of _dataset_for_stmt without early stopping. See for_stmt. |
1177 | 1175 | _tf_distributed_dataset_for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 794 | function | Overload of for..in statement that iterates over the input. |
1178 | 1176 | while_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 817 | function | Functional form of a while statement.
The loop operates on a so-called state, which includes all symbols that are
variant across loop iterations. In what follows we refer to state as either
a tuple of entities that represent an actual state, or a list of arguments
of the corresponding types.
Args:
test: Callable with the state as arguments, and boolean return type. The
loop condition.
body: Callable with the state as arguments, and state as return type. The
actual loop body.
get_state: Additional callable which can capture additional state (such as
the values of composite symbols). This is only useful when staging the
loop.
  set_state: Additional callable which saves values captured by get_state back
    into the Python environment. This is only useful when staging the loop.
init_vars: Tuple containing the initial state.
basic_symbol_names: Tuple containing basic loop var names.
composite_symbol_names: Tuple containing composite loop var names.
opts: Optional dict of extra loop parameters.
Returns:
Tuple containing the final state. |
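The while_stmt state-threading contract admits the same kind of sketch; again this is only a hypothetical model of the Python overload, not the TF-staging path:

```python
def while_stmt(test, body, get_state, set_state, init_vars,
               basic_symbol_names, composite_symbol_names, opts):
    """Simplified Python overload: loop while test holds, threading state."""
    state = init_vars
    while test(*state):
        state = body(*state)
    return state


# Sum the integers 1..5 using the functional form: state is (total, n).
total, n = while_stmt(
    lambda total, n: n > 0,
    lambda total, n: (total + n, n - 1),
    None, None, (0, 5), ('total', 'n'), (), None)
```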
1179 | 1177 | _shape_invariants_mapping_to_positional_list | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 873 | function | |
1180 | 1178 | _tf_while_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 883 | function | Overload of while_stmt that stages a TF while_stmt. |
1181 | 1179 | _PythonLoopChecker | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 925 | class | Verifies Python loops for TF-specific limits. |
1182 | 1180 | _py_while_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 987 | function | Overload of while_stmt that executes a Python while loop. |
1183 | 1181 | if_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 1008 | function | Functional form of an if statement.
Args:
cond: Boolean.
body: Callable with no arguments, and outputs of the positive (if) branch as
return type.
orelse: Callable with no arguments, and outputs of the negative (else)
branch as return type.
get_state: Function that returns a tuple containing the values of all
composite symbols modified within the conditional. This allows access to
state that branches may mutate through side effects. This function is not
needed and should not be called when dispatching to code matching Python's
default semantics. This is useful for checkpointing to avoid unintended
side-effects when staging requires evaluating all code-paths.
set_state: Function to set the values of all composite symbols modified
within the conditional. This is the complement to get_state, used to
restore checkpointed values. The single argument is a tuple containing values
for each composite symbol that may be modified in a branch of the
conditional. This is usually the result of a call to get_state.
basic_symbol_names: Tuple containing basic loop var names.
composite_symbol_names: Tuple containing composite loop var names.
Returns:
Tuple containing the statement outputs. |
1184 | 1182 | tf_if_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 1048 | function | Overload of if_stmt that stages a TF cond. |
1185 | 1183 | _isolate_state | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 1088 | function | Wraps func to (best-effort) isolate state mutations that func may do.
The simplest example of state mutation is mutation of variables (via e.g.
attributes), or modification of globals.
This allows us to more safely execute this function without worrying about
side effects when the function wasn't normally expected to execute. For
example, staging requires that the function is executed ahead of time, and
we need to ensure its effects are not observed during normal execution.
Args:
func: () -> Any
get_state: () -> Any, returns the current state
set_state: (Any) -> None, resets the state to the specified values.
Typically the result of an earlier call to `get_state`.
Returns:
Tuple[Any, Any], where the first element is the return value of `func`,
and the second is the final state values. |
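The checkpoint-and-restore pattern that _isolate_state describes can be sketched as follows; the helper names here are illustrative assumptions, not the private TF function itself:

```python
def isolate_state(func, get_state, set_state):
    """Runs func, captures its final state, then restores the prior state.

    Returns (result, final_state), per the docstring's contract.
    """
    initial = get_state()
    result = func()
    final = get_state()
    set_state(initial)  # Undo any side effects func had on the state.
    return result, final


# Example: a function whose side effect on `log` is isolated.
log = []


def effectful():
    log.append('ran')
    return 42


result, final = isolate_state(
    effectful,
    lambda: list(log),           # get_state: snapshot the mutable state
    lambda s: log.__setitem__(slice(None), s))  # set_state: restore in place
```

`result` is the return value, `final` captures the mutated state, and `log` itself is restored to its pre-call contents.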
1186 | 1184 | _wrap_disallow_undefs_from_cond | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 1121 | function | Wraps conditional branch to disallow returning undefined symbols. |
1187 | 1185 | _py_if_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 1152 | function | Overload of if_stmt that executes a Python if statement. |
1188 | 1186 | ForLoopTest | tensorflow/tensorflow/python/autograph/operators/control_flow_test.py | 49 | class | |
1189 | 1187 | WhileLoopTest | tensorflow/tensorflow/python/autograph/operators/control_flow_test.py | 540 | class | |
1190 | 1188 | IfStmtTest | tensorflow/tensorflow/python/autograph/operators/control_flow_test.py | 775 | class | |
1191 | 1189 | new_list | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 36 | function | The list constructor.
Args:
iterable: Optional elements to fill the list with.
Returns:
A list-like object. The exact return value depends on the initial elements. |
1192 | 1190 | tf_tensor_array_new | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 57 | function | Overload of new_list that stages a Tensor list creation. |
1193 | 1191 | tf_tensor_list_new | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 107 | function | Overload of new_list that stages a Tensor list creation. |
1194 | 1192 | _py_list_new | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 166 | function | Overload of new_list that creates a Python list. |
1195 | 1193 | list_append | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 171 | function | The list append function.
Note: it is unspecified whether list_ will be mutated or not. If list_ is
a TensorFlow entity, it will typically not be mutated. If list_ is a plain
list, it will be. In general, if the list is mutated then the return value
should point to the original entity.
Args:
list_: An entity that supports append semantics.
x: The element to append.
Returns:
Same as list_, after the append was performed.
Raises:
ValueError: if list_ is not of a known list-like type. |
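The mutate-and-return convention in the list_append docstring can be shown with a sketch of just the plain-Python overload (the Tensor list and TensorArray overloads are omitted here):

```python
def list_append(list_, x):
    """Simplified dispatch: only the plain-Python overload is sketched."""
    if isinstance(list_, list):
        list_.append(x)  # Plain lists are mutated in place...
        return list_     # ...so the return value aliases the original.
    raise ValueError(
        'unsupported list-like type: %s' % type(list_).__name__)


l = [1, 2]
out = list_append(l, 3)
```

For a plain list, `out` is the same object as `l`, which is the "return value should point to the original entity" behavior the docstring requires.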
1196 | 1194 | _tf_tensor_list_append | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 202 | function | Overload of list_append that stages a Tensor list write. |
1197 | 1195 | _tf_tensorarray_append | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 218 | function | Overload of list_append that stages a TensorArray write. |
1198 | 1196 | _py_list_append | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 223 | function | Overload of list_append that executes a Python list append. |
1199 | 1197 | ListPopOpts | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 230 | class | |
1200 | 1198 | list_pop | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 235 | function | The list pop function.
Note: it is unspecified whether list_ will be mutated or not. If list_ is
a TensorFlow entity, it will typically not be mutated. If list_ is a plain
list, it will be. In general, if the list is mutated then the return value
should point to the original entity.
Args:
list_: An entity that supports pop semantics.
i: Optional index to pop from. May be None.
opts: A ListPopOpts.
Returns:
Tuple (x, out_list_):
out_list_: same as list_, after the removal was performed.
x: the removed element value.
Raises:
ValueError: if list_ is not of a known list-like type or the operation is
not supported for that type. |
1201 | 1199 | _tf_tensor_list_pop | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 272 | function | Overload of list_pop that stages a Tensor list pop. |
1202 | 1200 | _py_list_pop | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 289 | function | Overload of list_pop that executes a Python list append. |
1203 | 1201 | ListStackOpts | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 299 | class | |
1204 | 1202 | list_stack | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 305 | function | The list stack function.
This does not have a direct correspondent in Python. The closest idiom to
this is tf.stack or np.stack. It's different from those in the sense that it
accepts a Tensor list, rather than a list of tensors. It can also accept
TensorArray. When the target is anything else, the dispatcher will rely on
ctx.original_call for fallback.
Args:
list_: An entity that supports append semantics.
opts: A ListStackOpts object.
Returns:
The output of the stack operation, typically a Tensor. |
1205 | 1203 | _tf_tensorarray_stack | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 335 | function | Overload of list_stack that stages a TensorArray stack. |
1206 | 1204 | _tf_tensor_list_stack | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 340 | function | Overload of list_stack that stages a Tensor list write. |
1207 | 1205 | _py_list_stack | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 348 | function | Overload of list_stack that executes a Python list append. |
1208 | 1206 | ListTest | tensorflow/tensorflow/python/autograph/operators/data_structures_test.py | 31 | class | |
1209 | 1207 | DispatchContext | tensorflow/tensorflow/python/autograph/operators/dispatch_context.py | 27 | class | Allows passing additional parameters to the specific implementations.
Attributes:
options: Optional dict of extra arguments that may be required by specific
implementations. |
1210 | 1208 | assert_stmt | tensorflow/tensorflow/python/autograph/operators/exceptions.py | 26 | function | Functional form of an assert statement.
This follows the semantics of the Python assert statement, however the
concrete implementations may deviate from it. See the respective
implementation for details.
In general, the assert statement should not be used for control flow.
Furthermore, it is encouraged that the assertion expressions should not have
side effects.
Args:
expression1: Any
expression2: Callable[[], Any], returns the expression to include in the
error message when expression1 evaluates to False. When expression1 is
True, the result of expression2 will not be evaluated, however,
expression2 itself may be evaluated in some implementations.
Returns:
Any, implementation-dependent.
Raises:
ValueError: if any arguments are illegal. |
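The lazy-message contract in the assert_stmt docstring relies on a Python detail: in `assert cond, msg`, the message expression is evaluated only when `cond` is falsy. A sketch of the Python overload:

```python
def assert_stmt(expression1, expression2):
    """Python overload: expression2 is only called when the assert fires."""
    if not callable(expression2):
        raise ValueError('expression2 must be a callable')
    # The message operand of `assert` is evaluated lazily, so expression2()
    # only runs when expression1 is falsy.
    assert expression1, expression2()


assert_stmt(2 + 2 == 4, lambda: 'arithmetic is broken')  # passes silently
```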
1211 | 1209 | _tf_assert_stmt | tensorflow/tensorflow/python/autograph/operators/exceptions.py | 62 | function | Overload of assert_stmt that stages a TF Assert.
This implementation deviates from Python semantics as follows:
(1) the assertion is verified regardless of the state of __debug__
(2) on assertion failure, the graph execution will fail with
tensorflow.errors.ValueError, rather than AssertionError.
Args:
expression1: tensorflow.Tensor, must evaluate to a tf.bool scalar
expression2: Callable[[], Union[tensorflow.Tensor, List[tensorflow.Tensor]]]
Returns:
tensorflow.Operation |
1212 | 1210 | _py_assert_stmt | tensorflow/tensorflow/python/autograph/operators/exceptions.py | 83 | function | Overload of assert_stmt that executes a Python assert statement. |
1213 | 1211 | ExceptionsTest | tensorflow/tensorflow/python/autograph/operators/exceptions_test.py | 28 | class | |
1214 | 1212 | not_ | tensorflow/tensorflow/python/autograph/operators/logical.py | 26 | function | Functional form of "not". |
1215 | 1213 | _tf_not | tensorflow/tensorflow/python/autograph/operators/logical.py | 33 | function | Implementation of the "not_" operator for TensorFlow. |
1216 | 1214 | _py_not | tensorflow/tensorflow/python/autograph/operators/logical.py | 38 | function | Default Python implementation of the "not_" operator. |
1217 | 1215 | and_ | tensorflow/tensorflow/python/autograph/operators/logical.py | 43 | function | Functional form of "and". Uses lazy evaluation semantics. |
1218 | 1216 | _tf_lazy_and | tensorflow/tensorflow/python/autograph/operators/logical.py | 51 | function | Lazy-eval equivalent of "and" for Tensors. |
1219 | 1217 | _py_lazy_and | tensorflow/tensorflow/python/autograph/operators/logical.py | 57 | function | Lazy-eval equivalent of "and" in Python. |
1220 | 1218 | or_ | tensorflow/tensorflow/python/autograph/operators/logical.py | 62 | function | Functional form of "or". Uses lazy evaluation semantics. |
1221 | 1219 | _tf_lazy_or | tensorflow/tensorflow/python/autograph/operators/logical.py | 70 | function | Lazy-eval equivalent of "or" for Tensors. |
1222 | 1220 | _py_lazy_or | tensorflow/tensorflow/python/autograph/operators/logical.py | 76 | function | Lazy-eval equivalent of "or" in Python. |
1223 | 1221 | eq | tensorflow/tensorflow/python/autograph/operators/logical.py | 81 | function | Functional form of "equal". |
1224 | 1222 | _tf_equal | tensorflow/tensorflow/python/autograph/operators/logical.py | 88 | function | Overload of "equal" for Tensors. |
1225 | 1223 | _py_equal | tensorflow/tensorflow/python/autograph/operators/logical.py | 93 | function | Overload of "equal" that falls back to Python's default implementation. |
1226 | 1224 | not_eq | tensorflow/tensorflow/python/autograph/operators/logical.py | 98 | function | Functional form of "not-equal". |
1227 | 1225 | LogicalOperatorsTest | tensorflow/tensorflow/python/autograph/operators/logical_test.py | 27 | class | |
1228 | 1226 | overload_of | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 65 | function | |
1229 | 1227 | _find_originating_frame | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 71 | function | Locates the frame in which `caller_fn_scope` was defined. |
1230 | 1228 | locals_in_original_context | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 92 | function | Executes the locals function in the context of a specified function. |
1231 | 1229 | globals_in_original_context | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 97 | function | Executes the globals function in the context of a specified function. |
1232 | 1230 | eval_in_original_context | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 102 | function | Executes the eval function in the context of a specified function. |
1233 | 1231 | super_in_original_context | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 117 | function | Executes the super function in the context of a specified function.
See https://docs.python.org/3/library/functions.html#super for the exact
details
Args:
f: Callable, typically the super builtin
args: List[Any], the original call arguments
caller_fn_scope: Optional[function_wrappers.FunctionScope], the function
scope of the converted function in which this call was originally made
Returns:
The result of calling `f` as if it was called in the frame indicated by
`caller_fn_scope`. |
1234 | 1232 | abs_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 179 | function | |
1235 | 1233 | _tf_abs | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 187 | function | |
1236 | 1234 | _tf_dataset_abs | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 191 | function | |
1237 | 1235 | _py_abs | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 198 | function | |
1238 | 1236 | float_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 202 | function | |
1239 | 1237 | _tf_float | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 208 | function | |
1240 | 1238 | _py_float | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 215 | function | |
1241 | 1239 | int_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 219 | function | |
1242 | 1240 | _tf_int | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 225 | function | |
1243 | 1241 | _py_int | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 235 | function | |
1244 | 1242 | len_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 241 | function | |
1245 | 1243 | _tf_tensor_array_len | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 253 | function | |
1246 | 1244 | _tf_tensor_list_len | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 257 | function | |
1247 | 1245 | _tf_tensor_len | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 261 | function | Overload of len_ for Tensor arguments. |
1248 | 1246 | _tf_dataset_len | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 294 | function | |
1249 | 1247 | _py_len | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 314 | function | |
1250 | 1248 | print_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 318 | function | Overload of the print builtin. |
1251 | 1249 | _py_print | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 334 | function | |
1252 | 1250 | _tf_py_func_print | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 338 | function | Overload of print_ as a py_func implementation. |
1253 | 1251 | range_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 360 | function | |
1254 | 1252 | _tf_range | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 366 | function | Overload of range_ that generates a TF range tensor. |
1255 | 1253 | _py_range | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 383 | function | |
1256 | 1254 | enumerate_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 391 | function | |
1257 | 1255 | _tf_dataset_enumerate | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 401 | function | |
1258 | 1256 | _py_enumerate | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 405 | function | |
1259 | 1257 | zip_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 409 | function | |
1260 | 1258 | _tf_dataset_zip | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 415 | function | |
1261 | 1259 | _py_zip | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 419 | function | |
1262 | 1260 | map_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 423 | function | |
1263 | 1261 | _tf_dataset_map | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 429 | function | |
1264 | 1262 | _py_map | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 433 | function | |
1265 | 1263 | next_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 437 | function | |
1266 | 1264 | _verify_spec_compatible | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 444 | function | Verifies that a symbol has a type compatible with a given spec.
Here, compatibility is viewed in the general TensorFlow sense: that the dtypes
are the same after implicit conversion, if both are tensors.
This verifier ensures consistent treatment of types across AutoGraph.
Args:
input_name: A name to use for `input_` in error messages.
spec_name: A name to use for `spec` in error messages.
input_: Any, value to verify.
spec: TypeSpec that `input_` must be compatible with.
Raises:
ValueError if the two types have been determined not to be compatible. |
1267 | 1265 | _verify_structure_compatible | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 480 | function | Verifies that a possibly-structured symbol has types compatible with another.
See _verify_spec_compatible for a more concrete meaning of "compatible".
Unlike _verify_spec_compatible, which handles singular Tensor-spec objects,
_verify_structure_compatible can process structures recognized by tf.nest.
Args:
input_name: A name to use for `input_` in error messages.
spec_name: A name to use for `spec` in error messages.
input_: Any, value to verify. May, but doesn't need to, be a structure.
spec: Any, value that `input_` must be compatible with. May, but doesn't
need to, be a structure.
Raises:
ValueError if the two types have been determined not to be compatible. |
1268 | 1266 | next_tf_iterator | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 509 | function | |
1269 | 1267 | next_py | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 521 | function | |
1270 | 1268 | filter_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 527 | function | |
1271 | 1269 | _tf_dataset_filter | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 533 | function | |
1272 | 1270 | _py_filter | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 537 | function | |
1273 | 1271 | any_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 541 | function | |
1274 | 1272 | _tf_dataset_any | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 552 | function | |
1275 | 1273 | _py_any | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 566 | function | |
1276 | 1274 | all_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 570 | function | |
1277 | 1275 | _tf_dataset_all | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 578 | function | |
1278 | 1276 | _py_all | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 592 | function | |
1279 | 1277 | sorted_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 596 | function | |
1280 | 1278 | _tf_sorted | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 602 | function | Overload of sorted_ for Tensor iterable. |
1281 | 1279 | _py_sorted | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 627 | function | |
1282 | 1280 | TestBase | tensorflow/tensorflow/python/autograph/operators/py_builtins_test.py | 41 | class | |
1283 | 1281 | PyBuiltinsTest | tensorflow/tensorflow/python/autograph/operators/py_builtins_test.py | 48 | class | |
1284 | 1282 | GetItemOpts | tensorflow/tensorflow/python/autograph/operators/slices.py | 34 | class | |
1285 | 1283 | get_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 38 | function | The slice read operator (i.e. __getitem__).
Note: it is unspecified whether target will be mutated or not. In general,
if target is mutable (like Python lists), it will be mutated.
Args:
target: An entity that supports getitem semantics.
i: Index to read from.
opts: A GetItemOpts object.
Returns:
The read element.
Raises:
ValueError: if target is not of a supported type. |
1286 | 1284 | _tf_tensorarray_get_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 70 | function | Overload of get_item that stages a TensorArray read. |
1287 | 1285 | _tf_tensor_list_get_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 75 | function | Overload of get_item that stages a Tensor list read. |
1288 | 1286 | _tf_tensor_get_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 84 | function | Overload of get_item that stages a Tensor (not Tensor list) read. |
1289 | 1287 | _tf_tensor_string_get_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 89 | function | Overload of get_item that stages a Tensor string read. |
1290 | 1288 | _py_get_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 95 | function | Overload of get_item that executes a Python list modification. |
1291 | 1289 | set_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 100 | function | The slice write operator (i.e. __setitem__).
Note: it is unspecified whether target will be mutated or not. In general,
if target is mutable (like Python lists), it will be mutated.
Args:
target: An entity that supports setitem semantics.
i: Index to modify.
x: The new element value.
Returns:
Same as target, after the update was performed.
Raises:
ValueError: if target is not of a supported type. |
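The get_item/set_item pair above can be sketched for the plain-Python case; the Tensor, TensorArray, and Tensor-list overloads are omitted, so this is only a model of the mutable-target behavior the docstrings describe:

```python
def get_item(target, i, opts=None):
    """Simplified Python overload of the slice read operator."""
    return target[i]


def set_item(target, i, x):
    """Simplified Python overload: mutable targets are updated in place,
    and the (possibly mutated) target is returned."""
    target[i] = x
    return target


l = set_item([0, 0, 0], 1, 9)
v = get_item(l, 1)
```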
1292 | 1290 | _tf_tensorarray_set_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 128 | function | Overload of set_item that stages a TensorArray write. |
1293 | 1291 | _tf_tensor_list_set_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 133 | function | Overload of set_item that stages a Tensor list update. |
1294 | 1292 | _tf_tensor_set_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 138 | function | Overload of set_item that stages a Tensor scatter update. |
1295 | 1293 | _py_set_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 143 | function | Overload of set_item that executes a Python list modification. |
1296 | 1294 | SlicesTest | tensorflow/tensorflow/python/autograph/operators/slices_test.py | 27 | class | |
1297 | 1295 | ld | tensorflow/tensorflow/python/autograph/operators/variables.py | 22 | function | Load variable operator. |
1298 | 1296 | ldu | tensorflow/tensorflow/python/autograph/operators/variables.py | 29 | function | Load variable operator that returns Undefined when failing to evaluate.
Note: the name ("load or return undefined") is abbreviated to minimize
the amount of clutter in generated code.
This variant of `ld` is useful when loading symbols that may be undefined at
runtime, such as composite symbols, and whether they are defined or not cannot
be determined statically. For example `d['a']` is undefined when `d` is an
empty dict.
Args:
load_v: Lambda that executes the actual read.
name: Human-readable name of the symbol being read.
Returns:
Either the value of the symbol, or Undefined, if the symbol is not fully
defined. |
1299 | 1297 | Undefined | tensorflow/tensorflow/python/autograph/operators/variables.py | 54 | class | Represents an undefined symbol in Python.
This is used to reify undefined symbols, which is required to use the
functional form of loops.
Example:
while n > 0:
n = n - 1
s = n
return s # Runtime error if n == 0
This is valid Python code and will not result in an error as long as n
is positive. The use of this class is to stay as close to Python semantics
as possible for staged code of this nature.
Converted version of the above showing the possible usage of this class:
s = Undefined('s')
init_state = (s,)
s = while_loop(cond, body, init_state)
return s # s is an instance of Undefined if the loop never runs
Attributes:
symbol_name: Text, identifier for the undefined symbol |
1300 | 1298 | UndefinedReturnValue | tensorflow/tensorflow/python/autograph/operators/variables.py | 106 | class | Represents a return value that is undefined. |
1301 | 1299 | SpecialValuesTest | tensorflow/tensorflow/python/autograph/operators/variables_test.py | 25 | class | |
1302 | 1300 | NoValue | tensorflow/tensorflow/python/autograph/pyct/anno.py | 37 | class | |
1303 | 1301 | Basic | tensorflow/tensorflow/python/autograph/pyct/anno.py | 43 | class | Container for basic annotation keys.
The enum values are used strictly for documentation purposes. |
1304 | 1302 | Static | tensorflow/tensorflow/python/autograph/pyct/anno.py | 67 | class | Container for static analysis annotation keys.
The enum values are used strictly for documentation purposes. |
1305 | 1303 | keys | tensorflow/tensorflow/python/autograph/pyct/anno.py | 110 | function | |
1306 | 1304 | getanno | tensorflow/tensorflow/python/autograph/pyct/anno.py | 116 | function | |
1307 | 1305 | hasanno | tensorflow/tensorflow/python/autograph/pyct/anno.py | 123 | function | |
1308 | 1306 | setanno | tensorflow/tensorflow/python/autograph/pyct/anno.py | 127 | function | |
1309 | 1307 | delanno | tensorflow/tensorflow/python/autograph/pyct/anno.py | 137 | function | |
1310 | 1308 | copyanno | tensorflow/tensorflow/python/autograph/pyct/anno.py | 145 | function | |
1311 | 1309 | dup | tensorflow/tensorflow/python/autograph/pyct/anno.py | 154 | function | Recursively copies annotations in an AST tree.
Args:
node: ast.AST
copy_map: Dict[Hashable, Hashable], maps a source anno key to a destination
key. All annotations with the source key will be copied to identical
annotations with the destination key.
field_name: str |
1312 | 1310 | AnnoTest | tensorflow/tensorflow/python/autograph/pyct/anno_test.py | 30 | class | |
1313 | 1311 | CleanCopier | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 30 | class | NodeTransformer-like visitor that copies an AST. |
1314 | 1312 | copy_clean | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 63 | function | Creates a deep copy of an AST.
The copy will not include fields that are prefixed by '__', with the
exception of user-specified annotations.
Args:
node: ast.AST
preserve_annos: Optional[Set[Hashable]], annotation keys to include in the
copy
Returns:
ast.AST |
1315 | 1313 | SymbolRenamer | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 79 | class | Transformer that can rename symbols to simple names. |
1316 | 1314 | rename_symbols | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 130 | function | Renames symbols in an AST. Requires qual_names annotations. |
1317 | 1315 | keywords_to_dict | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 140 | function | Converts a list of ast.keyword objects to a dict. |
1318 | 1316 | PatternMatcher | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 150 | class | Matches a node against a pattern represented by a node. |
1319 | 1317 | matches | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 214 | function | Basic pattern matcher for AST.
The pattern may contain wildcards represented by the symbol '_'. A node
matches a pattern if for every node in the tree, either there is a node of
the same type in pattern, or a Name node with id='_'.
Args:
node: ast.AST
pattern: ast.AST
Returns:
bool |
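The wildcard-matching semantics described above can be sketched with the standard `ast` module. This is an independent re-implementation under the stated rules, not the pyct code:

```python
import ast

def matches(node, pattern):
    """Wildcard AST matching: a Name with id '_' in the pattern matches anything."""
    if isinstance(pattern, ast.Name) and pattern.id == '_':
        return True
    if type(node) is not type(pattern):
        return False
    if isinstance(node, ast.AST):
        # Compare every field of the pattern against the same field of the node.
        return all(matches(getattr(node, field, None), pat_value)
                   for field, pat_value in ast.iter_fields(pattern))
    if isinstance(node, list):
        return (len(node) == len(pattern)
                and all(matches(n, p) for n, p in zip(node, pattern)))
    return node == pattern   # plain values: identifiers, constants, None

assert matches(ast.parse('f(1)', mode='eval').body,
               ast.parse('f(_)', mode='eval').body)
assert not matches(ast.parse('g(1)', mode='eval').body,
                   ast.parse('f(_)', mode='eval').body)
```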
1320 | 1318 | apply_to_single_assignments | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 236 | function | Applies a function to each individual assignment.
This function can process a possibly-unpacked (e.g. a, b = c, d) assignment.
It tries to break down the unpacking if possible. In effect, it is equivalent
to passing the assigned values in SSA form to apply_fn.
Examples:
The following will result in apply_fn(a, c), apply_fn(b, d):
a, b = c, d
The following will result in apply_fn(a, c[0]), apply_fn(b, c[1]):
a, b = c
The following will result in apply_fn(a, (b, c)):
a = b, c
It uses the visitor pattern to allow subclasses to process single
assignments individually.
Args:
targets: Union[List[ast.AST, ...], Tuple[ast.AST, ...], ast.AST], should be
used with the targets field of an ast.Assign node
values: ast.AST
apply_fn: Callable[[ast.AST, ast.AST], None], called with the
respective nodes of each single assignment |
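The three unpacking cases in the docstring can be sketched as follows; this is a simplified stand-alone version, not the pyct implementation:

```python
import ast

def apply_to_single_assignments(targets, values, apply_fn):
    """Breaks a possibly-unpacked assignment into single (target, value) calls."""
    if not isinstance(targets, (list, tuple)):
        targets = (targets,)
    for target in targets:
        if isinstance(target, (ast.Tuple, ast.List)):
            if isinstance(values, (ast.Tuple, ast.List)):
                # a, b = c, d  ->  apply_fn(a, c), apply_fn(b, d)
                for t, v in zip(target.elts, values.elts):
                    apply_to_single_assignments(t, v, apply_fn)
            else:
                # a, b = c  ->  apply_fn(a, c[0]), apply_fn(b, c[1])
                for i, t in enumerate(target.elts):
                    v = ast.Subscript(value=values,
                                      slice=ast.Constant(value=i),
                                      ctx=ast.Load())
                    apply_to_single_assignments(t, v, apply_fn)
        else:
            # a = b, c  ->  apply_fn(a, (b, c))
            apply_fn(target, values)

node = ast.parse('a, b = c, d').body[0]
pairs = []
apply_to_single_assignments(node.targets, node.value,
                            lambda t, v: pairs.append((t.id, v.id)))
assert pairs == [('a', 'c'), ('b', 'd')]
```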
1321 | 1319 | parallel_walk | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 283 | function | Walks two ASTs in parallel.
The two trees must have identical structure.
Args:
node: Union[ast.AST, Iterable[ast.AST]]
other: Union[ast.AST, Iterable[ast.AST]]
Yields:
Tuple[ast.AST, ast.AST]
Raises:
ValueError: if the two trees don't have identical structure. |
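The parallel walk can be sketched as a generator over `ast.iter_fields`; a simplified stand-in, assuming both trees were parsed by the same Python version:

```python
import ast

def parallel_walk(node, other):
    """Yields aligned (node, other) pairs; raises if the structures diverge."""
    if type(node) is not type(other):
        raise ValueError('inconsistent structure: %r vs %r' % (node, other))
    yield node, other
    for (_, v1), (_, v2) in zip(ast.iter_fields(node), ast.iter_fields(other)):
        c1 = v1 if isinstance(v1, list) else [v1]
        c2 = v2 if isinstance(v2, list) else [v2]
        if len(c1) != len(c2):
            raise ValueError('inconsistent structure')
        for a, b in zip(c1, c2):
            if isinstance(a, ast.AST) or isinstance(b, ast.AST):
                yield from parallel_walk(a, b)

pairs = list(parallel_walk(ast.parse('x = 1'), ast.parse('y = 2')))
assert all(type(a) is type(b) for a, b in pairs)
names = [(a.id, b.id) for a, b in pairs if isinstance(a, ast.Name)]
assert names == [('x', 'y')]
```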
1322 | 1320 | AstUtilTest | tensorflow/tensorflow/python/autograph/pyct/ast_util_test.py | 35 | class | |
1323 | 1321 | _TransformedFnCache | tensorflow/tensorflow/python/autograph/pyct/cache.py | 26 | class | Generic hierarchical cache for transformed functions.
The keys are soft references (i.e. they are discarded when the key is
destroyed) created from the source function by `_get_key`. The subkeys are
strong references and can be any value. Typically they identify different
kinds of transformation. |
1324 | 1322 | CodeObjectCache | tensorflow/tensorflow/python/autograph/pyct/cache.py | 63 | class | A function cache based on code objects.
Code objects are good proxies for the source code of a function.
This cache efficiently handles functions that share code objects, such as
functions defined in a loop, bound methods, etc.
The cache falls back to the function object, if it doesn't have a code object. |
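Why code objects make good cache keys can be demonstrated directly: a `def` statement executed repeatedly creates distinct function objects that all share one code object, so a code-keyed cache stores a single entry for them:

```python
def make_fns():
    """Defines the 'same' function three times, as happens in a loop."""
    fns = []
    for i in range(3):
        def f(x):
            return x + i
        fns.append(f)
    return fns

fns = make_fns()
# Three distinct function objects...
assert len({id(f) for f in fns}) == 3
# ...but one shared code object, so a code-keyed cache holds one entry.
cache = {}
for f in fns:
    cache.setdefault(f.__code__, 'transformed')
assert len(cache) == 1
```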
1325 | 1323 | UnboundInstanceCache | tensorflow/tensorflow/python/autograph/pyct/cache.py | 81 | class | A function cache based on unbound function objects.
Using the function for the cache key allows efficient handling of object
methods.
Unlike the _CodeObjectCache, this discriminates between different functions
even if they have the same code. This is needed for decorators that may
masquerade as another function. |
1326 | 1324 | CacheTest | tensorflow/tensorflow/python/autograph/pyct/cache_test.py | 25 | class | |
1327 | 1325 | Node | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 54 | class | A node in the CFG.
Although new instances of this class are mutable, the objects that a user
finds in the CFG are typically not.
The nodes of the CFG maintain pointers to allow
efficient walking in both forward and reverse order. The following property
holds for all nodes: "child in node.next" iff "node in child.prev".
Attributes:
next: FrozenSet[Node, ...], the nodes that follow this node, in control
flow order
prev: FrozenSet[Node, ...], the nodes that precede this node, in reverse
control flow order
ast_node: ast.AST, the AST node corresponding to this CFG node |
1328 | 1326 | Graph | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 95 | class | A Control Flow Graph.
The CFG maintains an index to allow looking up a CFG node by the AST node to
which it is associated. The index can also be enumerated in top-down, depth
first order.
Walking the graph in forward or reverse order is supported by double
parent-child links.
Note: the error nodes are not wired to their corresponding finally guards,
because these are shared, and wiring them would create a reverse path from
normal control flow into the error nodes, which we want to avoid.
The graph also maintains edges corresponding to higher level statements
like for-else loops. A node is considered a successor of a statement if there
is an edge from a node that is lexically a child of that statement to a node
that is not. Statement predecessors are analogously defined.
Attributes:
entry: Node, the entry node
exit: FrozenSet[Node, ...], the exit nodes
error: FrozenSet[Node, ...], nodes that exit due to an explicitly raised
error (errors propagated from function calls are not accounted)
index: Dict[ast.Node, Node], mapping AST nodes to the respective CFG
node
stmt_prev: Dict[ast.Node, FrozenSet[Node, ...]], mapping statement AST
nodes to their predecessor CFG nodes
stmt_next: Dict[ast.Node, FrozenSet[Node, ...]], mapping statement AST
nodes to their successor CFG nodes |
1329 | 1327 | _WalkMode | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 145 | class | |
1330 | 1328 | GraphVisitor | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 152 | class | Base class for CFG visitors.
This implementation is not thread safe.
The visitor has some facilities to simplify dataflow analyses. In particular,
it allows revisiting the nodes at the decision of the subclass. This can be
used to visit the graph until the state reaches a fixed point.
For more details on dataflow analysis, see
https://www.seas.harvard.edu/courses/cs252/2011sp/slides/Lec02-Dataflow.pdf
Note: the literature generally suggests visiting successor nodes only when the
state of the current node changed, regardless of whether that successor has
ever been visited. This implementation visits every successor at least once.
Attributes:
graph: Graph
in_: Dict[Node, Any], stores node-keyed state during a visit
out: Dict[Node, Any], stores node-keyed state during a visit |
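The revisit-until-fixed-point idea can be sketched as a worklist over a toy CFG (dict of successor lists); the transfer function here (reaching nodes) is an illustrative assumption, not tied to any particular pyct analysis:

```python
def visit_forward(graph, entry):
    """Worklist sketch: revisit successors until node state stops changing."""
    preds = {n: [p for p in graph if n in graph[p]] for n in graph}
    out = {n: set() for n in graph}
    work = [entry]
    while work:
        node = work.pop()
        in_state = set().union(*[out[p] for p in preds[node]])
        new_out = in_state | {node}      # transfer fn: nodes reaching this one
        if new_out != out[node]:
            out[node] = new_out
            work.extend(graph[node])     # successors' inputs changed: revisit
    return out

# Diamond CFG: a -> {b, c} -> d
cfg = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
out = visit_forward(cfg, 'a')
assert out['d'] == {'a', 'b', 'c', 'd'}
```

Note how `d` is visited twice: once after `c` and again after `b` changes its input, which is exactly the fixed-point behavior the docstring describes.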
1331 | 1329 | GraphBuilder | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 252 | class | Builder that constructs a CFG from a given AST.
This GraphBuilder facilitates constructing the DAG that forms the CFG when
nodes
are supplied in lexical order (i.e., top-down, depth first). Under these
conditions, it supports building patterns found in typical structured
programs.
This builder ignores the flow generated by exceptions, which are assumed to
always be catastrophic and present purely for diagnostic purposes (e.g. to
print debug information). Statements like raise and try/except sections are
allowed and will generate control flow edges, but ordinary statements are
assumed not to raise exceptions.
Finally sections are also correctly interleaved between break/continue/return
nodes and their subsequent statements.
Important concepts:
* nodes - nodes refer to CFG nodes; AST nodes are qualified explicitly
* leaf set - since the graph is constructed gradually, a leaf set maintains
the CFG nodes that will precede the node that the builder expects to
receive next; when an ordinary node is added, it is connected to the
existing leaves and it in turn becomes the new leaf
* jump nodes - nodes that should generate edges other than what
ordinary nodes would; these correspond to break, continue and return
statements
* sections - logical delimiters for subgraphs that require special
edges; there are various types of sections, each admitting various
types of jump nodes; sections are identified by their corresponding AST
node |
1332 | 1330 | AstToCfg | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 647 | class | Converts an AST to CFGs.
A separate CFG will be constructed for each function. |
1333 | 1331 | build | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 964 | function | |
1334 | 1332 | CountingVisitor | tensorflow/tensorflow/python/autograph/pyct/cfg_test.py | 28 | class | |
1335 | 1333 | GraphVisitorTest | tensorflow/tensorflow/python/autograph/pyct/cfg_test.py | 42 | class | |
1336 | 1334 | AstToCfgTest | tensorflow/tensorflow/python/autograph/pyct/cfg_test.py | 93 | class | |
1337 | 1335 | FrameInfo | tensorflow/tensorflow/python/autograph/pyct/error_utils.py | 26 | class | |
1338 | 1336 | _stack_trace_inside_mapped_code | tensorflow/tensorflow/python/autograph/pyct/error_utils.py | 34 | function | Summarizes inner traceback frames up to the call to a given function.
This function locates the innermost (i.e. most recent) frame that corresponds
to code that source_map can map, and returns a
translated stack trace ending at that frame. If no such frame is found, the
entire stack trace is summarized.
For example, the following code:
def f():
for i in tf.range(1):
z = y + i # z only defined here
Would generate this traceback:
<converted code>
ag__.for_stmt(...)
<for_stmt>
return _known_len_tf_for_stmt(iter_, extra_test, body, init_state)
<_known_len_tf_for_stmt>
_disallow_undefs_into_loop(*init_state)
<_disallow_undefs_into_loop>
raise ...
Which is then processed into:
<f>
for i in tf.range(1):
<for_stmt>
return _known_len_tf_for_stmt(iter_, extra_test, body, init_state)
<_known_len_tf_for_stmt>
_disallow_undefs_into_loop(*init_state)
<_disallow_undefs_into_loop>
raise ...
Args:
tb: traceback.FrameSummary, The traceback corresponding to an error.
Typically, the output of traceback.StackSummary.extract(capture_locals=True).
source_map: Dict[LineLocation, OriginInfo], a source map as created by
origin_info.create_source_map.
converter_filename: str, the file path of the converted module. Call frames
corresponding to this module are elided and their preceding frames are
marked as allowlisted. Note that frames enclosing converted code are
dropped using a different mechanism.
Returns:
List[FrameInfo] |
1339 | 1337 | MultilineMessageKeyError | tensorflow/tensorflow/python/autograph/pyct/error_utils.py | 141 | class | |
1340 | 1338 | ErrorMetadataBase | tensorflow/tensorflow/python/autograph/pyct/error_utils.py | 153 | class | Container objects attached to exceptions raised in user code.
This metadata allows re-raising exceptions that occur in generated code, with
a custom error message that includes a stack trace relative to user-readable
code from which the generated code originated. |
1341 | 1339 | ErrorMetadataBaseTest | tensorflow/tensorflow/python/autograph/pyct/error_utils_test.py | 28 | class | |
1342 | 1340 | PyCTError | tensorflow/tensorflow/python/autograph/pyct/errors.py | 22 | class | Base class for all exceptions. |
1343 | 1341 | UnsupportedLanguageElementError | tensorflow/tensorflow/python/autograph/pyct/errors.py | 27 | class | Raised for code patterns that AutoGraph does not support. |
1344 | 1342 | _is_constant_gast_2 | tensorflow/tensorflow/python/autograph/pyct/gast_util.py | 31 | function | |
1345 | 1343 | _is_constant_gast_3 | tensorflow/tensorflow/python/autograph/pyct/gast_util.py | 36 | function | |
1346 | 1344 | is_literal | tensorflow/tensorflow/python/autograph/pyct/gast_util.py | 40 | function | Tests whether node represents a Python literal. |
1347 | 1345 | _is_ellipsis_gast_2 | tensorflow/tensorflow/python/autograph/pyct/gast_util.py | 53 | function | |
1348 | 1346 | _is_ellipsis_gast_3 | tensorflow/tensorflow/python/autograph/pyct/gast_util.py | 57 | function | |
1349 | 1347 | islambda | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 60 | function | |
1350 | 1348 | isnamedtuple | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 68 | function | Returns True if the argument is a namedtuple-like. |
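A "namedtuple-like" check of this kind is commonly implemented as a structural heuristic; a sketch (not the pyct code):

```python
import collections

def isnamedtuple(f):
    """Heuristic: a tuple subclass exposing a _fields tuple of strings."""
    if not (isinstance(f, type) and issubclass(f, tuple)):
        return False
    fields = getattr(f, '_fields', None)
    return isinstance(fields, tuple) and all(isinstance(n, str) for n in fields)

Point = collections.namedtuple('Point', ['x', 'y'])
assert isnamedtuple(Point)
assert not isnamedtuple(tuple)       # plain tuple has no _fields
assert not isnamedtuple((1, 2))      # instances are not types
```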
1351 | 1349 | isbuiltin | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 82 | function | Returns True if the argument is a built-in function. |
1352 | 1350 | isconstructor | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 96 | function | Returns True if the argument is an object constructor.
In general, any object of type class is a constructor, with the exception
of classes created using a callable metaclass.
See below for why a callable metaclass is not a trivial combination:
https://docs.python.org/2.7/reference/datamodel.html#customizing-class-creation
Args:
cls: Any
Returns:
Bool |
1353 | 1351 | _fix_linecache_record | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 116 | function | Fixes potential corruption of linecache in the presence of functools.wraps.
functools.wraps modifies the target object's __module__ field, which seems
to confuse linecache in special instances, for example when the source is
loaded from a .par file (see https://google.github.io/subpar/subpar.html).
This function simply triggers a call to linecache.updatecache when a mismatch
is detected between the object's __module__ property and the object's source
file.
Args:
obj: Any |
1354 | 1352 | getimmediatesource | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 143 | function | A variant of inspect.getsource that ignores the __wrapped__ property. |
1355 | 1353 | getnamespace | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 151 | function | Returns the complete namespace of a function.
Namespace is defined here as the mapping of all non-local variables to values.
This includes the globals and the closure variables. Note that this captures
the entire globals collection of the function, and may contain extra symbols
that it does not actually use.
Args:
f: User defined function.
Returns:
A dict mapping symbol names to values. |
1356 | 1354 | getqualifiedname | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 177 | function | Returns the name by which a value can be referred to in a given namespace.
If the object defines a parent module, the function attempts to use it to
locate the object.
This function will recurse inside modules, but it will not search objects for
attributes. The recursion depth is controlled by max_depth.
Args:
namespace: Dict[str, Any], the namespace to search into.
object_: Any, the value to search.
max_depth: Optional[int], a limit to the recursion depth when searching
inside modules.
visited: Optional[Set[int]], ID of modules to avoid visiting.
Returns: Union[str, None], the fully-qualified name that resolves to the value
o, or None if it couldn't be found. |
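The namespace search with bounded recursion into modules can be sketched as follows (a simplified stand-in; the real function handles more cases):

```python
import math
import types

def getqualifiedname(namespace, object_, max_depth=2, visited=None):
    """Finds a dotted name in `namespace` that resolves to `object_`."""
    if visited is None:
        visited = set()
    # Direct hits first: a name bound to the exact object.
    for name, value in namespace.items():
        if value is object_:
            return name
    # Then recurse into modules, up to max_depth, avoiding revisits.
    if max_depth > 0:
        for name, value in namespace.items():
            if isinstance(value, types.ModuleType) and id(value) not in visited:
                visited.add(id(value))
                inner = getqualifiedname(vars(value), object_,
                                         max_depth - 1, visited)
                if inner is not None:
                    return '%s.%s' % (name, inner)
    return None

ns = {'math': math, 'x': 1}
assert getqualifiedname(ns, math.sqrt) == 'math.sqrt'
```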
1357 | 1355 | _get_unbound_function | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 240 | function | |
1358 | 1356 | getdefiningclass | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 250 | function | Resolves the class (e.g. one of the superclasses) that defined a method. |
1359 | 1357 | getmethodclass | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 265 | function | Resolves a function's owner, e.g. a method's class.
Note that this returns the object that the function was retrieved from, not
necessarily the class where it was defined.
This function relies on Python stack frame support in the interpreter, and
has the same limitations as inspect.currentframe.
Limitations. This function will only work correctly if the owned class is
visible in the caller's global or local variables.
Args:
m: A user defined function
Returns:
The class that this function was retrieved from, or None if the function
is not an object or class method, or the class that owns the object or
method is not visible to m.
Raises:
ValueError: if the class could not be resolved for any unexpected reason. |
1360 | 1358 | getfutureimports | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 339 | function | Detects what future imports are necessary to safely execute entity source.
Args:
entity: Any object
Returns:
A tuple of future strings |
1361 | 1359 | decorator | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 37 | function | |
1362 | 1360 | function_decorator | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 41 | function | |
1363 | 1361 | wrapping_decorator | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 47 | function | |
1364 | 1362 | TestClass | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 59 | class | |
1365 | 1363 | free_function | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 85 | function | |
1366 | 1364 | factory | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 89 | function | |
1367 | 1365 | free_factory | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 93 | function | |
1368 | 1366 | InspectUtilsTest | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 99 | class | |
1369 | 1367 | _remove_file | tensorflow/tensorflow/python/autograph/pyct/loader.py | 37 | function | Remove a file, if it exists. |
1370 | 1368 | load_source | tensorflow/tensorflow/python/autograph/pyct/loader.py | 50 | function | Loads the given source code as a Python module. |
1371 | 1369 | load_ast | tensorflow/tensorflow/python/autograph/pyct/loader.py | 70 | function | Loads the given AST as a Python module.
Compiling the AST code this way ensures that the source code is readable by
e.g. `pdb` or `inspect`.
Args:
nodes: Union[ast.AST, Iterable[ast.AST]], the code to compile, as an AST
object.
indentation: Text, the string to use for indentation.
include_source_map: bool, whether to return a source map.
delete_on_exit: bool, whether to delete the temporary file used for
compilation on exit.
Returns:
Tuple[module, Text, Dict[LineLocation, OriginInfo]], containing:
the module containing the unparsed nodes, the source code corresponding to
nodes, and the source map. If include_source_map is False, the source map
will be None. |
1372 | 1370 | load_source | tensorflow/tensorflow/python/autograph/pyct/loader_deprecated_py2.py | 40 | function | Loads the given source code as a Python module. |
1373 | 1371 | load_ast | tensorflow/tensorflow/python/autograph/pyct/loader_deprecated_py2.py | 58 | function | Loads the given AST as a Python module.
Compiling the AST code this way ensures that the source code is readable by
e.g. `pdb` or `inspect`.
Args:
nodes: Union[ast.AST, Iterable[ast.AST]], the code to compile, as an AST
object.
indentation: Text, the string to use for indentation.
include_source_map: bool, whether to return a source map.
delete_on_exit: bool, whether to delete the temporary file used for
compilation on exit.
Returns:
Tuple[module, Text, Dict[LineLocation, OriginInfo]], containing:
the module containing the unparsed nodes, the source code corresponding to
nodes, and the source map. If include_source_map is False, the source map
will be None. |
1374 | 1372 | LoaderTest | tensorflow/tensorflow/python/autograph/pyct/loader_test.py | 33 | class | |
1375 | 1373 | Namer | tensorflow/tensorflow/python/autograph/pyct/naming.py | 24 | class | Symbol name generator. |
1376 | 1374 | NamerTest | tensorflow/tensorflow/python/autograph/pyct/naming_test.py | 25 | class | |
1377 | 1375 | LineLocation | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 35 | class | Similar to Location, but without column information.
Attributes:
filename: Text
lineno: int, 1-based |
1378 | 1376 | Location | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 46 | class | Encodes code location information.
Attributes:
filename: Text
lineno: int, 1-based
col_offset: int
line_loc: LineLocation |
1379 | 1377 | OriginInfo | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 62 | class | Container for information about the source code before conversion.
Attributes:
loc: Location
function_name: Optional[Text]
source_code_line: Text
comment: Optional[Text] |
1380 | 1378 | create_source_map | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 89 | function | Creates a source map between an annotated AST and the code it compiles to.
Note: this function assumes nodes, code and filepath correspond to the
same code.
Args:
nodes: Iterable[ast.AST, ...], one or more AST nodes.
code: Text, the source code in which nodes are found.
filepath: Text
Returns:
Dict[LineLocation, OriginInfo], mapping locations in code to locations
indicated by origin annotations in node. |
1381 | 1379 | _Function | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 160 | class | |
1382 | 1380 | OriginResolver | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 166 | class | Annotates an AST with additional source information like file name. |
1383 | 1381 | resolve | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 226 | function | Adds origin information to an AST, based on the source it was loaded from.
This allows us to map the original source code line numbers to generated
source code.
Note: the AST may be a part of a larger context (e.g. a function is part of
a module that may contain other things). However, this function does not
assume the source argument contains the entire context, nor that it contains
only code corresponding to node itself. It does assume that node was
parsed from the given source code.
For this reason, two extra arguments are required, and they indicate the
location of the node in the original context.
Args:
node: gast.AST, the AST to annotate.
source: Text, the source code representing node.
context_filepath: Text
context_lineno: int
context_col_offset: int |
1384 | 1382 | resolve_entity | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 271 | function | Like resolve, but extracts the context information from an entity. |
1385 | 1383 | OriginInfoTest | tensorflow/tensorflow/python/autograph/pyct/origin_info_test.py | 34 | class | |
1386 | 1384 | _unfold_continuations | tensorflow/tensorflow/python/autograph/pyct/parser.py | 60 | function | Removes any backslash line continuations from the code. |
1387 | 1385 | dedent_block | tensorflow/tensorflow/python/autograph/pyct/parser.py | 65 | function | Dedents a code so that its first line starts at row zero. |
1388 | 1386 | parse_entity | tensorflow/tensorflow/python/autograph/pyct/parser.py | 133 | function | Returns the AST and source code of given entity.
Args:
entity: Any, Python function/method/class
future_features: Iterable[Text], future features to use (e.g.
'print_statement'). See
https://docs.python.org/2/reference/simple_stmts.html#future
Returns:
gast.AST, Text: the parsed AST node; the source code that was parsed to
generate the AST (including any prefixes that this function may have added). |
1389 | 1387 | _without_context | tensorflow/tensorflow/python/autograph/pyct/parser.py | 169 | function | Returns a clean node and source code without indenting and context. |
1390 | 1388 | _arg_name | tensorflow/tensorflow/python/autograph/pyct/parser.py | 203 | function | |
1391 | 1389 | _node_matches_argspec | tensorflow/tensorflow/python/autograph/pyct/parser.py | 212 | function | Returns True if node fits the argspec of func. |
1392 | 1390 | _parse_lambda | tensorflow/tensorflow/python/autograph/pyct/parser.py | 234 | function | Returns the AST and source code of given lambda function.
Args:
lam: types.LambdaType, Python function/method/class
Returns:
gast.AST, Text: the parsed AST node; the source code that was parsed to
generate the AST (including any prefixes that this function may have added). |
1393 | 1391 | parse | tensorflow/tensorflow/python/autograph/pyct/parser.py | 323 | function | Returns the AST of given piece of code.
Args:
src: Text
preamble_len: Int, indicates leading nodes in the parsed AST which should be
dropped.
single_node: Bool, whether `src` is assumed to be represented by exactly one
AST node.
Returns:
ast.AST |
1394 | 1392 | parse_expression | tensorflow/tensorflow/python/autograph/pyct/parser.py | 347 | function | Returns the AST of a given expression.
Args:
src: A piece of code that represents a single Python expression
Returns:
A gast.AST object.
Raises:
ValueError: if src does not consist of a single Expression. |
1395 | 1393 | unparse | tensorflow/tensorflow/python/autograph/pyct/parser.py | 366 | function | Returns the source code of given AST.
Args:
node: The code to compile, as an AST object.
indentation: Unused, deprecated. The returned code will always be indented
at 4 spaces.
include_encoding_marker: Bool, whether to include a comment on the first
line to explicitly specify UTF-8 encoding.
Returns:
code: The source code generated from the AST object
source_mapping: A mapping between the user and AutoGraph generated code. |
1396 | 1394 | ParserTest | tensorflow/tensorflow/python/autograph/pyct/parser_test.py | 31 | class | |
1397 | 1395 | PrettyPrinter | tensorflow/tensorflow/python/autograph/pyct/pretty_printer.py | 26 | class | Print AST nodes. |
1398 | 1396 | fmt | tensorflow/tensorflow/python/autograph/pyct/pretty_printer.py | 128 | function | |
1399 | 1397 | PrettyPrinterTest | tensorflow/tensorflow/python/autograph/pyct/pretty_printer_test.py | 28 | class | |
1400 | 1398 | CallerMustSetThis | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 36 | class | |
1401 | 1399 | Symbol | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 40 | class | Represents a Python symbol. |
1402 | 1400 | Literal | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 44 | class | Represents a Python numeric literal. |
1403 | 1401 | QN | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 57 | class | Represents a qualified name. |
1404 | 1402 | QnResolver | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 210 | class | Annotates nodes with QN information.
Note: Not using NodeAnnos to avoid circular dependencies. |
1405 | 1403 | resolve | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 251 | function | |
1406 | 1404 | from_str | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 255 | function | |
1407 | 1405 | QNTest | tensorflow/tensorflow/python/autograph/pyct/qual_names_test.py | 31 | class | |
1408 | 1406 | QNResolverTest | tensorflow/tensorflow/python/autograph/pyct/qual_names_test.py | 183 | class | |
1409 | 1407 | ContextAdjuster | tensorflow/tensorflow/python/autograph/pyct/templates.py | 35 | class | Adjusts the ctx field of nodes to ensure consistency.
This transformer can change the ctx fields of a variable, tuple and other
AST elements that allow one, based on whether the element is being read or
written. |
1410 | 1408 | ReplaceTransformer | tensorflow/tensorflow/python/autograph/pyct/templates.py | 108 | class | Replace AST nodes. |
1411 | 1409 | _convert_to_ast | tensorflow/tensorflow/python/autograph/pyct/templates.py | 218 | function | Converts from a known data type to AST. |
1412 | 1410 | replace | tensorflow/tensorflow/python/autograph/pyct/templates.py | 234 | function | Replaces placeholders in a Python template.
AST Name and Tuple nodes always receive the context inferred from
the template. However, when replacing more complex nodes (that can potentially
contain Name children), the caller is responsible for setting the
appropriate context.
Args:
template: A string representing Python code. Any symbol name that appears in
the template code can be used as a placeholder.
**replacements: A mapping from placeholder names to (lists of) AST nodes
that these placeholders will be replaced by. String values are also
supported as a shorthand for AST Name nodes with the respective ID.
Returns:
An AST node or list of AST nodes with the replacements made. If the
template was a function, a list will be returned. If the template was a
node, the same node will be returned. If the template was a string, an
AST node will be returned (a `Module` node in the case of a multi-line
string, an `Expr` node otherwise).
Raises:
ValueError: if the arguments are incorrect. |
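The placeholder-substitution idea can be sketched with a `NodeTransformer` over a parsed template. This stand-in supports only Name placeholders and string shorthands, and omits the ctx adjustment the real implementation performs:

```python
import ast
import textwrap

class _Replacer(ast.NodeTransformer):
    """Swaps placeholder Name nodes for replacement AST nodes."""
    def __init__(self, replacements):
        self._replacements = replacements
    def visit_Name(self, node):
        repl = self._replacements.get(node.id)
        return node if repl is None else ast.copy_location(repl, node)

def replace(template, **replacements):
    # String values are shorthand for Name nodes, as described above.
    repl = {k: ast.Name(id=v, ctx=ast.Load()) if isinstance(v, str) else v
            for k, v in replacements.items()}
    tree = ast.parse(textwrap.dedent(template))
    return ast.fix_missing_locations(_Replacer(repl).visit(tree)).body

nodes = replace('result = fn_name(arg)', fn_name='compute', arg='x')
assert ast.unparse(nodes[0]) == 'result = compute(x)'
```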
1413 | 1411 | replace_as_expression | tensorflow/tensorflow/python/autograph/pyct/templates.py | 279 | function | Variant of replace that generates expressions, instead of code blocks. |
1414 | 1412 | _CtxClearer | tensorflow/tensorflow/python/autograph/pyct/templates_test.py | 33 | class | |
1415 | 1413 | _parse_with_unset_ctx | tensorflow/tensorflow/python/autograph/pyct/templates_test.py | 42 | function | |
1416 | 1414 | _CtxChecker | tensorflow/tensorflow/python/autograph/pyct/templates_test.py | 48 | class | |
1417 | 1415 | TemplatesTest | tensorflow/tensorflow/python/autograph/pyct/templates_test.py | 64 | class | |
1418 | 1416 | AnalysisLevel | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 32 | class | |
1419 | 1417 | Context | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 41 | class | Contains information about a source code transformation.
This object is mutable, and is updated during conversion. Not thread safe.
Attributes:
info: EntityInfo, immutable.
namer: naming.Namer.
current_origin: origin_info.OriginInfo, holds the OriginInfo of the last
AST node to be processed successfully. Useful for error handling.
user: A user-supplied context object. The object is opaque to the
infrastructure, but will be passed through to all custom transformations. |
1420 | 1418 | EntityInfo | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 63 | class | Contains information about a Python entity.
Immutable.
Examples of entities include functions and classes.
Attributes:
name: The name that identifies this entity.
source_code: The entity's source code.
source_file: The entity's source file.
future_features: Tuple[Text], the future features that this entity was
compiled with. See
https://docs.python.org/2/reference/simple_stmts.html#future.
namespace: Dict[str, ], containing symbols visible to the entity (excluding
parameters). |
1421 | 1419 | _StateStack | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 87 | class | Templated context manager.
This class provides syntactic sugar for a stack of objects of known
type. It allows accessing attributes of the object at the top of the stack
directly against this object, which allows for very terse syntax.
For example, this code:
stack = _StateStack(Foo)
stack.enter()
stack.bar
Is equivalent to:
stack = []
stack.append(Foo())
foo = stack[-1]
foo.bar
See _State for more on how this is used.
Attributes:
type: Any, the type of objects that this stack holds
level: int, the current stack depth
stack: List[Any], the actual stack
value: Any, the instance of the object at the top of the stack |
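The docstring above can be sketched as a small standalone class. This is a hypothetical minimal version for illustration, not the actual `_StateStack` implementation: attribute reads and writes on the stack object are forwarded to the instance at the top of an internal list.

```python
# Minimal sketch of the _StateStack idea (hypothetical standalone version):
# attribute access is forwarded to the object at the top of the stack.

class StateStack:
    def __init__(self, type_):
        # Bypass our own __setattr__ while setting up internal state.
        object.__setattr__(self, '_type', type_)
        object.__setattr__(self, '_stack', [])

    def enter(self):
        self._stack.append(self._type())

    def exit(self):
        return self._stack.pop()

    @property
    def level(self):
        return len(self._stack)

    @property
    def value(self):
        return self._stack[-1]

    def __getattr__(self, name):
        # Only called when normal lookup fails: forward to top-of-stack.
        return getattr(self._stack[-1], name)

    def __setattr__(self, name, value):
        # Writes go to the object at the top of the stack.
        setattr(self._stack[-1], name, value)


class Foo:
    def __init__(self):
        self.bar = None


stack = StateStack(Foo)
stack.enter()
stack.bar = 42   # terse syntax: sets attribute on the top Foo instance
```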
1422 | 1420 | _State | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 159 | class | Syntactic sugar for accessing an instance of a StateStack context manager.
This structure offers syntactic sugar over a dict of stacks of objects
of known type. These structures are useful to keep state during AST walks.
Multiple different scopes can be tracked in parallel. For example:
s = _State()
s[foo].enter()
s[bar].enter() # this will not affect s[foo]
Element access has special semantics:
* keys are a data type
* element values are _StateStack(type=key) objects
* missing elements are automatically added, similarly to defaultdict
For example, the following block :
_State s
s[Foo]
Is equivalent to:
s = {}
if Foo not in s:
s[Foo] = Foo()
s[Foo]
See Base for how it's used. |
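The defaultdict-like semantics described above can be sketched as follows; this is a simplified hypothetical version that uses plain lists in place of `_StateStack` objects, to show only the auto-creating, independently-scoped mapping.

```python
# Hypothetical sketch of the _State mapping: a dict keyed by type whose
# missing entries are auto-created on first access, like defaultdict.

class State:
    def __init__(self):
        self._stacks = {}

    def __getitem__(self, key):
        # Auto-create a stack for `key` on first access.
        if key not in self._stacks:
            self._stacks[key] = []  # simplified: a plain list as the stack
        return self._stacks[key]


class Foo:
    pass


class Bar:
    pass


s = State()
s[Foo].append(Foo())  # enter a Foo scope
s[Bar].append(Bar())  # independent: does not affect s[Foo]
s[Bar].pop()          # exit the Bar scope; s[Foo] is unchanged
```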
1423 | 1421 | NodeStateTracker | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 200 | class | Base class for general-purpose Python code transformation.
This abstract class provides helpful functions, like state tracking within
the scope of arbitrary node, helpers for processing code blocks, debugging,
mapping of transformed code to original code, and others.
Scope-local state tracking: to keep state across nodes, at the level of
(possibly nested) scopes, use enter/exit_local_scope and set/get_local.
You must call enter/exit_local_scope manually, but the transformer detects
when they are not properly paired.
The transformer allows keeping state across calls that is local
to arbitrary nodes and their descendants, using the self.state attribute.
Multiple independent scopes are allowed and automatically constructed.
For example, to keep track of the `If` node that encloses any `Name` node,
one can write:
```
class FooType(object):
def __init__(self):
self.foo_property = None
class DummyTransformer(NodeStateTracker, ast.NodeTransformer):
def visit_If(self, node):
self.state[FooType].enter()
self.state[FooType].foo_property = node
node = self.generic_visit(node)
self.state[FooType].exit()
return node
def visit_Name(self, node):
self.state[FooType].foo_property # will hold the innermost enclosing if
```
Alternatively, the `enter()`/`exit()` calls can be managed by a `with`
statement:
```
def visit_If(self, node):
with self.state[FooType] as foo:
foo.foo_property = node
return self.generic_visit(node)
``` |
1424 | 1422 | Base | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 360 | class | Base class for general-purpose Python-to-Python code transformation.
This is an extension of ast.NodeTransformer that provides the additional
functions offered by NodeStateTracker. |
1425 | 1423 | CodeGenerator | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 478 | class | Base class for general-purpose Python-to-string code transformation.
Similar to Base, but outputs arbitrary strings instead of a Python AST.
This uses the same visitor mechanism that the standard NodeVisitor uses,
meaning that subclasses write handlers for the different kinds of nodes.
New code is generated using the emit method, which appends to a code buffer
that can be afterwards obtained from code_buffer.
Example:
class SimpleCodeGen(CodeGenerator):
def visitIf(self, node):
self.emit('if ')
self.visit(node.test)
self.emit(' { ')
self.visit(node.body)
self.emit(' } else { ')
self.visit(node.orelse)
self.emit(' } ')
node = ast.parse(...)
gen = SimpleCodeGen()
gen.visit(node)
# gen.code_buffer contains the resulting code |
1426 | 1424 | TransformerTest | tensorflow/tensorflow/python/autograph/pyct/transformer_test.py | 30 | class | |
1427 | 1425 | CodeGeneratorTest | tensorflow/tensorflow/python/autograph/pyct/transformer_test.py | 302 | class | |
1428 | 1426 | _wrap_into_factory | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 38 | function | Wraps an AST into the body of a factory with consistent lexical context.
The AST is expected to define some symbol with a name given by `entity_name`.
This mechanism ensures that the resulting transformed entity has lexical
scoping identical to that of the source entity, while allowing extra
parametrization.
Two nested factories achieve the following:
1. The inner factory dynamically creates the entity represented by `nodes`.
2. The inner factory is parametrized by a custom set of arguments.
3. The inner factory has a closure identical to that of the transformed
entity.
4. The inner factory has local variables named like `args`, which `nodes` may
use as additional parameters.
5. The inner factory returns the variables given by `entity_name`.
6. The outer factory is niladic.
7. The outer factory has no closure.
8. The outer factory creates the necessary lexical scope for the inner
factory, so that the loaded code has the given configuration for
closure/globals.
9. The outer factory returns the inner factory.
Roughly speaking, the following code is generated:
from __future__ import future_feature_1
from __future__ import future_feature_2
...
def outer_factory():
closure_var_1 = None
closure_var_2 = None
...
def inner_factory(arg_1, arg_2, ...):
<<nodes>>
return entity
return inner_factory
The lexical scoping is created using dummy symbol declarations which create
local variables in the body of the outer factory, so that the Python parser
correctly marks them as free non-global variables upon load (that is, it
creates cell slots for each symbol). These symbols are initialized with None,
but their values are not expected to be used; instead, the caller is expected
to replace them with the cells of the source entity. For more details, see:
https://docs.python.org/3/reference/executionmodel.html#binding-of-names
Args:
nodes: Tuple[ast.AST], the source code to wrap.
entity_name: Union[Text, ast.AST], the name of the principal entity that
`nodes` define.
inner_factory_name: Text, the name of the inner factory.
outer_factory_name: Text, the name of the outer factory.
closure_vars: Iterable[Text], names of the closure variables for the inner
factory.
factory_args: Iterable[Text], names of additional arguments for the
inner factory. Useful to configure variables that the converted code can
use. Typically, these are modules.
future_features: Iterable[Text], names of future statements to associate the
code with.
Returns:
ast.AST |
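The cell-slot trick described above can be demonstrated directly. The sketch below uses hypothetical names (`closure_var`, `entity`) mirroring the generated code shape: a dummy `None` assignment in the outer factory makes the compiler allocate a closure cell, which can later be rebound to point at the source entity's value.

```python
# Sketch of the nested-factory closure trick (hypothetical names):
# dummy locals in the outer factory become closure cells for the entity.

src = '''
def outer_factory():
    closure_var = None
    def inner_factory(extra_arg):
        def entity(x):
            return x + closure_var + extra_arg
        return entity
    return inner_factory
'''

namespace = {}
exec(compile(src, '<generated>', 'exec'), namespace)
inner = namespace['outer_factory']()
entity = inner(10)

# closure_var is a free variable of entity, held in a closure cell.
assert 'closure_var' in entity.__code__.co_freevars

# Rebind the cell (writable since Python 3.7) to emulate replacing it
# with a cell from the source entity's closure.
cell_index = entity.__code__.co_freevars.index('closure_var')
entity.__closure__[cell_index].cell_contents = 1
```

Note that without the dummy `closure_var = None` line, the parser would treat `closure_var` as a global reference inside `entity` and no cell would exist to rebind.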
1429 | 1427 | _PythonFnFactory | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 147 | class | Helper object that wraps a Python function factory. |
1430 | 1428 | GenericTranspiler | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 227 | class | A generic transpiler for Python functions.
Its interface is the `transform` API, which can process Python function
objects. Internally, it handles parsing.
Users typically subclass this, customizing the `transform_ast` method. The
output of `transform_ast` is returned directly by `transform`. Existing
methods like `transform_function` may also be overloaded.
Example:
class MyTransformer(GenericTranspiler):
def transform_ast(self, node, ctx):
result = <<transform node>>
return result
transformer = MyTransformer()
result = transformer.transform(f, ...)
# result is the output |
1431 | 1429 | PyToPy | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 368 | class | A generic Python-to-Python transpiler.
Its `transform` method offers a function-in, function-out interface.
Internally, it takes care of parsing, caching and loading of the translated
code.
Users typically subclass this, overriding `transform_ast`.
Usually, instances of this class are singletons, since each instance manages
its own cache. The caching can be controlled by overriding `get_caching_key`.
Example:
class MyTransformer(PyToPy):
def transform_ast(self, node, ctx):
node = <<transform node, usually using ast.NodeTransformer classes>>
return node
transformer = MyTransformer()
new_f, module, source_map = transformer.transform_function(f, ...)
# new_f is a function with signature identical to f
The transformed function has access to the same namespace as the original
function. To allow access to internal APIs, users may inject additional
symbols by overriding `get_extra_locals`. |
1432 | 1430 | FlipSignTransformer | tensorflow/tensorflow/python/autograph/pyct/transpiler_test.py | 30 | class | |
1433 | 1431 | TestTranspiler | tensorflow/tensorflow/python/autograph/pyct/transpiler_test.py | 38 | class | |
1434 | 1432 | PyToPyTest | tensorflow/tensorflow/python/autograph/pyct/transpiler_test.py | 55 | class | |
1435 | 1433 | DummyGensym | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 40 | class | A dumb gensym that suffixes a stem by sequential numbers from 1000. |
1436 | 1434 | ASTEdgePattern | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 60 | class | A pattern defining a type of AST edge.
This consists of three components:
- The type of the parent node, checked with isinstance,
- The name of the field, checked with string equality, and
- The type of the child node, also checked with isinstance.
If all three match, the whole pattern is considered to match.
In all three slots, the special value `anf.ANY` is treated as "match
anything". The internal nodes are produced from the `gast` library rather
than the standard `ast` module, which may affect `isinstance` checks. |
1437 | 1435 | AnfTransformer | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 89 | class | Performs the conversion to A-normal form (ANF). |
1438 | 1436 | _is_py2_name_constant | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 483 | function | |
1439 | 1437 | _is_trivial | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 487 | function | Returns whether to consider the given node 'trivial'.
The definition of 'trivial' is a node that can't meaningfully be pulled out
into its own assignment statement.
This is surprisingly difficult to do robustly across versions of Python and
gast, as the parsing of constants has changed, if I may, constantly.
Args:
node: An AST node to check for triviality
Returns:
trivial: A Python `bool` indicating whether the node is trivial. |
1440 | 1438 | transform | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 527 | function | Converts the given node to A-normal form (ANF).
The general idea of A-normal form: https://en.wikipedia.org/wiki/A-normal_form
The specific converters used here are based on Python AST semantics as
documented at https://greentreesnakes.readthedocs.io/en/latest/.
What exactly should be considered A-normal form for any given programming
language is not completely obvious. The transformation defined here is
therefore configurable as to which syntax to replace with a fresh variable and
which to leave be. The configuration is intentionally flexible enough to
define very precise variable insertion transformations, should that be
desired.
The configuration is a list of syntax rules, each of which is a 2-tuple:
- An `ASTEdgePattern` (which see) defining a type of AST edge, and
- Whether to transform children of such edges.
The special object `anf.ANY` may be used as a pattern that matches all edges.
Each replacement directive is one of three possible things:
- The object `anf.REPLACE`, meaning "Replace this child node with a variable",
- The object `anf.LEAVE`, meaning "Do not replace this child node with a
variable", or
- A Python callable. If a callable, it is called with the parent node, the
field name, and the child node, and must compute a boolean indicating
whether to transform the child node or not. The callable is free to use
whatever context information it chooses. The callable may be invoked more
than once on the same link, and must produce the same answer each time.
The syntax rules are tested in order, and the first match governs. If no rule
matches, the node is not transformed.
The above rules notwithstanding,
- Variable references are never replaced with (fresh) variables, as that would
accomplish nothing.
- The left-hand children of Assign and AugAssign nodes, and the children of
Del nodes, are never replaced with variables, as that would break their
semantics.
- The right-hand children of Assign nodes are never replaced with variables,
as the original assignment would still have to be present in the result
to define the new variable. (That is, there's no point in transforming
`x = sin(y)` into `tmp = sin(y); x = tmp`.)
- The right-hand children of AugAssign nodes are never replaced with variables
either, but only because the difference from Assign was considered a
potential source of confusion (and it would have been slightly awkward in
the code to treat the RHS differently than the LHS).
- Various special-purpose AST nodes are not exposed to the configuration, lest
the transform produce invalid syntax like, e.g., `tmp = +; x = 1 tmp 2`.
For example, the configuration
```python
[(anf.ASTEdgePattern(anf.ANY, anf.ANY, gast.expr), anf.REPLACE)]
```
gives explicit fresh names to all expressions regardless of context (except as
outlined above), whereas
```python
[(anf.ASTEdgePattern(gast.If, "test", anf.ANY), anf.REPLACE)]
```
only transforms the conditionals of `if` statements (but not, e.g., `while`).
If no configuration is supplied, the default behavior is to transform all
expressions except literal constants, which is defined as a configuration as
```python
# For Python 3, and gast library versions before 0.3
literals = (gast.Num, gast.Str, gast.Bytes, gast.NameConstant)
[(anf.ASTEdgePattern(anf.ANY, anf.ANY, literals), anf.LEAVE),
(anf.ASTEdgePattern(anf.ANY, anf.ANY, gast.expr), anf.REPLACE)]
```
Args:
node: The node to transform.
ctx: transformer.EntityInfo. TODO(mdan): What information does this
argument provide?
config: Optional ANF configuration. If omitted, ANF replaces all expressions
except literal constants. |
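The hoisting that ANF performs can be illustrated with a toy transformer built on the standard `ast` module (the real `AnfTransformer` works on `gast` trees and is far more general). This hypothetical sketch pulls non-trivial call arguments into fresh temporaries, numbered from 1000 in the spirit of `DummyGensym`:

```python
import ast


class MiniAnf(ast.NodeTransformer):
    """Toy ANF-style hoist: call arguments that are compound expressions
    are pulled out into fresh temporary assignments."""

    def __init__(self):
        self.counter = 1000
        self.hoisted = []

    def fresh(self):
        self.counter += 1
        return 'tmp_{}'.format(self.counter)

    def visit_Assign(self, node):
        self.hoisted = []
        node.value = self.visit(node.value)
        # Returning a list splices the hoisted statements before the original.
        return self.hoisted + [node]

    def visit_Call(self, node):
        new_args = []
        for arg in node.args:
            arg = self.visit(arg)
            if isinstance(arg, (ast.Name, ast.Constant)):
                new_args.append(arg)  # trivial: leave in place
            else:
                name = self.fresh()
                self.hoisted.append(ast.Assign(
                    targets=[ast.Name(id=name, ctx=ast.Store())],
                    value=arg))
                new_args.append(ast.Name(id=name, ctx=ast.Load()))
        node.args = new_args
        return node


tree = ast.parse('x = f(a + b)')
tree = MiniAnf().visit(tree)
ast.fix_missing_locations(tree)
print(ast.unparse(tree))
# -> tmp_1001 = a + b
#    x = f(tmp_1001)
```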
1441 | 1439 | exec_test_function | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py | 34 | function | |
1442 | 1440 | exec_expected_result | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py | 40 | function | |
1443 | 1441 | AnfTestBase | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py | 49 | class | |
1444 | 1442 | AnfTransformerTest | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py | 85 | class | |
1445 | 1443 | AnfNonTransformationTest | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py | 433 | class | Test that specifying "no transformation" does nothing.
Reuses all the examples of AnfTransformerTest by overriding
`assert_body_anfs_as_expected_`. |
1446 | 1444 | AnfConfiguredTest | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py | 454 | class | |
1447 | 1445 | Scope | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 36 | class | Encloses local symbol definition and usage information.
This can track for instance whether a symbol is modified in the current scope.
Note that scopes do not necessarily align with Python's scopes. For example,
the body of an if statement may be considered a separate scope.
Caution - the AST references held by this object are weak.
Scope objects are mutable during construction only, and must be frozen using
`Scope.finalize()` before use. Furthermore, a scope is consistent only after
all its children have been frozen. While analysing code blocks, scopes are
being gradually built, from the innermost scope outward. Freezing indicates
that the analysis of a code block is complete. Once frozen, mutation is no
longer allowed. `is_final` tracks whether the scope is frozen or not. Certain
properties, like `referenced`, are only accurate when called on frozen scopes.
Attributes:
parent: Optional[Scope], the parent scope, if any.
isolated: bool, whether the scope is a true Python scope (e.g. the scope of
a function), or just a surrogate tracking an ordinary code block. Using
the terminology of the Python 3 reference documentation, True roughly
represents an actual scope, whereas False represents an ordinary code
block.
function_name: Optional[str], name of the function owning this scope.
isolated_names: Set[qual_names.QN], identifiers that are isolated to this
scope (even if the scope is not isolated).
annotations: Set[qual_names.QN], identifiers used as type annotations
in this scope.
read: Set[qual_names.QN], identifiers read in this scope.
modified: Set[qual_names.QN], identifiers modified in this scope.
deleted: Set[qual_names.QN], identifiers deleted in this scope.
bound: Set[qual_names.QN], names that are bound to this scope. See
https://docs.python.org/3/reference/executionmodel.html#binding-of-names
for a precise definition.
globals: Set[qual_names.QN], names that are explicitly marked as global in
this scope. Note that this doesn't include free read-only vars bound to
global symbols.
nonlocals: Set[qual_names.QN], names that are explicitly marked as nonlocal
in this scope. Note that this doesn't include free read-only vars bound to
global symbols.
free_vars: Set[qual_names.QN], the free variables in this scope. See
https://docs.python.org/3/reference/executionmodel.html for a precise
definition.
params: WeakValueDictionary[qual_names.QN, ast.Node], function arguments
visible in this scope, mapped to the function node that defines them.
enclosing_scope: Scope, the innermost isolated scope that is a transitive
parent of this scope. May be the scope itself.
referenced: Set[qual_names.QN], the totality of the symbols used by this
scope and its parents.
is_final: bool, whether the scope is frozen or not.
Note - simple statements may never delete and modify a symbol at the same
time. However, compound ones like if statements can. In that latter case, it's
undefined whether the symbol is actually modified or deleted upon statement
exit. Certain analyses like reaching definitions need to be careful about
this. |
1448 | 1446 | _Comprehension | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 214 | class | |
1449 | 1447 | _FunctionOrClass | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 224 | class | |
1450 | 1448 | ActivityAnalyzer | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 230 | class | Annotates nodes with local scope information.
See Scope.
The use of this class requires that qual_names.resolve() has been called on
the node. This class will ignore nodes that have not been
annotated with their qualified names. |
1451 | 1449 | resolve | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 707 | function | |
1452 | 1450 | ActivityAnalyzerTest | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity_py3_test.py | 31 | class | Tests which can only run in Python 3. |
1453 | 1451 | ScopeTest | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity_test.py | 41 | class | |
1454 | 1452 | ActivityAnalyzerTestBase | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity_test.py | 114 | class | |
1455 | 1453 | ActivityAnalyzerTest | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity_test.py | 148 | class | |
1456 | 1454 | NoValue | tensorflow/tensorflow/python/autograph/pyct/static_analysis/annos.py | 27 | class | |
1457 | 1455 | NodeAnno | tensorflow/tensorflow/python/autograph/pyct/static_analysis/annos.py | 33 | class | Additional annotations used by the static analyzer.
These are in addition to the basic annotations declared in anno.py. |
1458 | 1456 | Analyzer | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 40 | class | CFG visitor that performs liveness analysis at statement level. |
1459 | 1457 | TreeAnnotator | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 96 | class | Runs liveness analysis on each of the functions defined in the AST.
If a function defines other local functions, those will have separate CFGs.
However, dataflow analysis needs to tie up these CFGs to properly emulate the
effect of closures. In the case of liveness, the parent function's live
variables must account for the variables that are live at the entry of each
subfunction. For example:
def foo():
# baz is live from here on
def bar():
print(baz)
This analyzer runs liveness analysis on each individual function, accounting
for the effect above. |
1460 | 1458 | resolve | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 206 | function | Resolves the live symbols at the exit of control flow statements.
Args:
node: ast.AST
source_info: transformer.SourceInfo
graphs: Dict[ast.FunctionDef, cfg.Graph]
include_annotations: Bool, whether type annotations should be included in
the analysis.
Returns:
ast.AST |
1461 | 1459 | LivenessAnalyzerTest | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness_py3_test.py | 30 | class | Tests which can only run in Python 3. |
1462 | 1460 | LivenessAnalyzerTestBase | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness_test.py | 37 | class | |
1463 | 1461 | LivenessAnalyzerTest | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness_test.py | 76 | class | |
1464 | 1462 | Definition | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 40 | class | Definition objects describe a unique definition of a variable.
Subclasses of this may be used by passing an appropriate factory function to
resolve.
Attributes:
param_of: Optional[ast.AST]
directives: Dict, optional definition annotations |
1465 | 1463 | _NodeState | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 59 | class | Abstraction for the state of the CFG walk for reaching definition analysis.
This is a value type. Only implements the strictly necessary operators.
Attributes:
value: Dict[qual_names.QN, Set[Definition, ...]], the defined symbols and
their possible definitions |
1466 | 1464 | Analyzer | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 112 | class | CFG visitor that determines reaching definitions at statement level. |
1467 | 1465 | TreeAnnotator | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 169 | class | AST visitor that annotates each symbol name with its reaching definitions.
Simultaneously, the visitor runs the dataflow analysis on each function node,
accounting for the effect of closures. For example:
def foo():
bar = 1
def baz():
# bar = 1 reaches here |
1468 | 1466 | resolve | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 279 | function | Resolves reaching definitions for each symbol.
Args:
node: ast.AST
source_info: transformer.SourceInfo
graphs: Dict[ast.FunctionDef, cfg.Graph]
definition_factory: Callable[[], Definition]
Returns:
ast.AST |
1469 | 1467 | ReachingDefinitionsAnalyzerTest | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions_py3_test.py | 26 | class | Tests which can only run in Python 3. |
1470 | 1468 | ReachingDefinitionsAnalyzerTestBase | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions_test.py | 38 | class | |
1471 | 1469 | ReachingDefinitionsAnalyzerTest | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions_test.py | 88 | class | |
1472 | 1470 | Definition | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 32 | class | Definition objects describe a unique definition of a function. |
1473 | 1471 | _NodeState | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 39 | class | Abstraction for the state of the CFG walk for reaching definition analysis.
This is a value type. Only implements the strictly necessary operators.
Attributes:
value: Dict[qual_names.QN, Set[Definition, ...]], the defined symbols and
their possible definitions |
1474 | 1472 | Analyzer | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 76 | class | CFG visitor that determines reaching definitions at statement level. |
1475 | 1473 | TreeAnnotator | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 109 | class | AST visitor that annotates each symbol name with its reaching definitions.
Simultaneously, the visitor runs the dataflow analysis on each function node,
accounting for the effect of closures. For example:
def foo():
def f():
pass
def g():
# `def f` reaches here |
1476 | 1474 | resolve | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 170 | function | Resolves reaching definitions for each symbol.
Args:
node: ast.AST
source_info: transformer.SourceInfo
graphs: Dict[ast.FunctionDef, cfg.Graph]
Returns:
ast.AST |
1477 | 1475 | ReachingFndefsAnalyzerTest | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs_test.py | 33 | class | |
1478 | 1476 | Resolver | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 41 | class | Resolver objects handle the process of looking up actual names and types.
All resolve_* methods:
* have a first namespace argument, mapping string to actual values
* specify names as QN objects
* specify types as a Set of inferred types
All resolve_* methods must return either:
* a set of `type` objects
* None |
1479 | 1477 | _SymbolTable | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 83 | class | Abstraction for the state of the CFG walk for type inference.
This is a value type. Only implements the strictly necessary operators.
Attributes:
value: Dict[qual_names.QN, Set[Type]], mapping symbols to the set of
possible types. |
1480 | 1478 | StmtInferrer | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 162 | class | Runs type inference on a single AST statement.
This visitor annotates most nodes with type information. It also sets types
for the symbols modified by this statement in its types_out property. |
1481 | 1479 | Analyzer | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 329 | class | CFG visitor that propagates type information across statements. |
1482 | 1480 | FunctionVisitor | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 394 | class | AST visitor that applies type inference to each function separately. |
1483 | 1481 | resolve | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 417 | function | Performs type inference.
Args:
node: ast.AST
source_info: transformer.SourceInfo
graphs: Dict[ast.FunctionDef, cfg.Graph]
resolver: Resolver
Returns:
ast.AST |
1484 | 1482 | TestResolver | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference_test.py | 32 | class | A very basic resolver for testing. |
1485 | 1483 | TestTranspiler | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference_test.py | 58 | class | |
1486 | 1484 | TypeInferenceAnalyzerTest | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference_test.py | 77 | class | |
1487 | 1485 | simple_function | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 23 | function | Docstring. |
1488 | 1486 | nested_functions | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 28 | function | Docstring. |
1489 | 1487 | function_with_print | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 37 | function | |
1490 | 1488 | SimpleClass | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 44 | class | |
1491 | 1489 | function_with_multiline_call | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 53 | function | Docstring. |
1492 | 1490 | basic_decorator | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 61 | function | |
1493 | 1491 | decorated_function | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 67 | function | |
1494 | 1492 | NodeSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 30 | class | |
1495 | 1493 | StatementSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 39 | class | |
1496 | 1494 | ExpressionSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 49 | class | |
1497 | 1495 | CompareSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 58 | class | |
1498 | 1496 | BinaryOpSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 71 | class | |
1499 | 1497 | UnaryOpSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 83 | class | |
1500 | 1498 | NameSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 87 | class | |
1501 | 1499 | CodeGenerator | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 98 | class | Generate random syntactically-valid Python ASTs. |
1502 | 1500 | generate_random_functiondef | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 233 | function | |
1503 | 1501 | CodeGenTest | tensorflow/tensorflow/python/autograph/pyct/testing/codegen_test.py | 28 | class | |
1504 | 1502 | wrapping_decorator | tensorflow/tensorflow/python/autograph/pyct/testing/decorators.py | 24 | function | |
1505 | 1503 | standalone_decorator | tensorflow/tensorflow/python/autograph/pyct/testing/decorators.py | 33 | function | |
1506 | 1504 | functional_decorator | tensorflow/tensorflow/python/autograph/pyct/testing/decorators.py | 41 | function | |
1507 | 1505 | set_verbosity | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 41 | function | Sets the AutoGraph verbosity level.
_Debug logging in AutoGraph_
More verbose logging is useful to enable when filing bug reports or doing
more in-depth debugging.
There are two means to control the logging verbosity:
* The `set_verbosity` function
* The `AUTOGRAPH_VERBOSITY` environment variable
`set_verbosity` takes precedence over the environment variable.
For example:
```python
import os
import tensorflow as tf
os.environ['AUTOGRAPH_VERBOSITY'] = '5'
# Verbosity is now 5
tf.autograph.set_verbosity(0)
# Verbosity is now 0
os.environ['AUTOGRAPH_VERBOSITY'] = '1'
# No effect, because set_verbosity was already called.
```
Logs entries are output to [absl](https://abseil.io)'s
[default output](https://abseil.io/docs/python/guides/logging),
with `INFO` level.
Logs can be mirrored to stdout by using the `alsologtostdout` argument.
Mirroring is enabled by default when Python runs in interactive mode.
Args:
level: int, the verbosity level; larger values specify increased verbosity;
0 means no logging. When reporting bugs, it is recommended to set this
value to a larger number, like 10.
alsologtostdout: bool, whether to also output log messages to `sys.stdout`. |
1508 | 1506 | trace | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 92 | function | Traces argument information at compilation time.
`trace` is useful when debugging, and it always executes during the tracing
phase, that is, when the TF graph is constructed.
_Example usage_
```python
import tensorflow as tf
for i in tf.range(10):
tf.autograph.trace(i)
# Output: <Tensor ...>
```
Args:
*args: Arguments to print to `sys.stdout`. |
1509 | 1507 | get_verbosity | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 114 | function | |
1510 | 1508 | has_verbosity | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 121 | function | |
1511 | 1509 | _output_to_stdout | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 125 | function | |
1512 | 1510 | error | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 131 | function | |
1513 | 1511 | log | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 138 | function | |
1514 | 1512 | warn | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 145 | function | |
1515 | 1513 | BasicRef | tensorflow/tensorflow/python/autograph/utils/compat_util.py | 27 | class | This shim emulates the nonlocal keyword in Py2-compatible source. |
1516 | 1514 | deprecated_py2_support | tensorflow/tensorflow/python/autograph/utils/compat_util.py | 34 | function | Swaps calling module with a Py2-specific implementation. Noop in Py3. |
1517 | 1515 | control_dependency_on_returns | tensorflow/tensorflow/python/autograph/utils/context_managers.py | 27 | function | Create a TF control dependency on the return values of a function.
If the function has no return value, a no-op context is returned.
Args:
return_value: The return value to set as control dependency.
Returns:
A context manager. |
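The "no-op context when there is nothing to depend on" pattern behind `control_dependency_on_returns` can be sketched without TensorFlow. `depend_on` below is a hypothetical stand-in for `tf.control_dependencies([return_value])`, and the `log` parameter exists only so the sketch is observable:

```python
import contextlib

@contextlib.contextmanager
def depend_on(return_value, log):
    # Stand-in for tf.control_dependencies([return_value]): record that the
    # dependency was installed before the body runs.
    log.append(('dep', return_value))
    yield

def control_dependency_on_returns(return_value, log):
    # With no return value, hand back a do-nothing context manager.
    if return_value is None:
        return contextlib.nullcontext()
    return depend_on(return_value, log)

log = []
with control_dependency_on_returns(None, log):
    pass                                  # nothing recorded
with control_dependency_on_returns('ret', log):
    log.append(('body',))
print(log)  # [('dep', 'ret'), ('body',)]
```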
1518 | 1516 | ContextManagersTest | tensorflow/tensorflow/python/autograph/utils/context_managers_test.py | 28 | class | |
1519 | 1517 | alias_tensors | tensorflow/tensorflow/python/autograph/utils/misc.py | 27 | function | Wraps any Tensor arguments with an identity op.
Any other argument, including Variables, is returned unchanged.
Args:
*args: Any arguments. Must contain at least one element.
Returns:
Same as *args, with Tensor instances replaced as described.
Raises:
ValueError: If args doesn't meet the requirements. |
1520 | 1518 | get_range_len | tensorflow/tensorflow/python/autograph/utils/misc.py | 55 | function | |
1521 | 1519 | MiscTest | tensorflow/tensorflow/python/autograph/utils/misc_test.py | 30 | class | |
1522 | 1520 | MatchDType | tensorflow/tensorflow/python/autograph/utils/py_func.py | 28 | class | Allows matching the dtype of an argument.
Used in conjunction with function calls. For example, MatchDType(0) will
match the DType of the first argument. |
1523 | 1521 | wrap_py_func | tensorflow/tensorflow/python/autograph/utils/py_func.py | 38 | function | Helper that wraps a callable to py_func.
The helper passes tensor arguments through the py_func interface. Non-tensor
arguments are allowed, and will be passed to f directly. Note that non-tensor
arguments captured by f will not update every time the wrapper is
called (this is consistent with its argument list, which only includes
the tensor arguments). In general, it's safest not to reuse this wrapper.
Args:
f: Callable
return_dtypes: None, or an individual DType, or a tuple/list of DType or
MatchDType, the data type for each of f's return value(s). Set to None if f
has no return values or use_dummy_return is True. Use MatchDType to define a
dtype identical to that of the `i`th argument (argument 0 is the first);
an argument must be of Tensor type if it is to be used with MatchDType.
args: Positional arguments for f, as list or tuple.
kwargs: Keyword arguments for f, as dict with string keys. May be None.
use_dummy_return: If True, the function will return a dummy value of 1
and discard its actual return value.
Returns:
The return values of f converted to tensor.
Raises:
ValueError: if any of the arguments are incorrect. |
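The `MatchDType` resolution described above can be illustrated in plain Python. This is a sketch under assumptions, not the `py_func.py` implementation; `FakeTensor` and `resolve_dtypes` are hypothetical names:

```python
class FakeTensor:
    # Minimal stand-in for a Tensor: all we need here is a dtype attribute.
    def __init__(self, dtype):
        self.dtype = dtype

class MatchDType:
    # Placeholder resolved to the dtype of the i-th positional argument.
    def __init__(self, arg_number):
        self.arg_number = arg_number

def resolve_dtypes(return_dtypes, args):
    # Concrete dtypes pass through; MatchDType(i) becomes args[i].dtype.
    resolved = []
    for d in return_dtypes:
        resolved.append(args[d.arg_number].dtype
                        if isinstance(d, MatchDType) else d)
    return resolved

args = (FakeTensor('float32'), FakeTensor('int64'))
print(resolve_dtypes([MatchDType(1), 'bool'], args))  # ['int64', 'bool']
```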
1524 | 1522 | PyFuncTest | tensorflow/tensorflow/python/autograph/utils/py_func_test.py | 27 | class | |
1525 | 1523 | dynamic_list_append | tensorflow/tensorflow/python/autograph/utils/tensor_list.py | 26 | function | Converts a list append call inline. |
1526 | 1524 | TensorList | tensorflow/tensorflow/python/autograph/utils/tensor_list.py | 43 | class | Tensor list wrapper API-compatible with Python built-in list. |
1527 | 1525 | TensorListTest | tensorflow/tensorflow/python/autograph/utils/tensor_list_test.py | 32 | class | |
1528 | 1526 | is_dense_tensor | tensorflow/tensorflow/python/autograph/utils/tensors.py | 32 | function | |
1529 | 1527 | is_tensor_array | tensorflow/tensorflow/python/autograph/utils/tensors.py | 38 | function | |
1530 | 1528 | is_tensor_list | tensorflow/tensorflow/python/autograph/utils/tensors.py | 42 | function | |
1531 | 1529 | is_range_tensor | tensorflow/tensorflow/python/autograph/utils/tensors.py | 51 | function | Returns True if a tensor is the result of a tf.range op. Best effort. |
1532 | 1530 | TensorsTest | tensorflow/tensorflow/python/autograph/utils/tensors_test.py | 30 | class | |
1533 | 1531 | AutoGraphTestCase | tensorflow/tensorflow/python/autograph/utils/testing.py | 30 | class | Tests specialized for AutoGraph, which run as tf.functions.
These tests use a staged programming-like approach: most of the test code runs
as-is inside a tf.function, but the assertions are lifted outside the
function, and run with the corresponding function values instead.
For example, the test:
def test_foo(self):
baz = bar();
self.assertEqual(baz, value)
is equivalent to writing:
def test_foo(self):
@tf.function
def test_fn():
baz = bar();
return baz, value
baz_actual, value_actual = test_fn()
self.assertEqual(baz_actual, value_actual) |
1534 | 1532 | list_local_devices | tensorflow/tensorflow/python/client/device_lib.py | 25 | function | List the devices available in the local process.
Args:
session_config: a session config proto or None to use the default config.
Returns:
A list of `DeviceAttribute` protocol buffers. |
1535 | 1533 | DeviceLibTest | tensorflow/tensorflow/python/client/device_lib_test.py | 28 | class | |
1536 | 1534 | PywrapeventsWriterTest | tensorflow/tensorflow/python/client/events_writer_test.py | 33 | class | |
1537 | 1535 | main | tensorflow/tensorflow/python/client/notebook.py | 53 | function | |
1538 | 1536 | TF_NewSessionOptions | tensorflow/tensorflow/python/client/pywrap_tf_session.py | 51 | function | |
1539 | 1537 | TF_Reset | tensorflow/tensorflow/python/client/pywrap_tf_session.py | 65 | function | |
1540 | 1538 | SessionInterface | tensorflow/tensorflow/python/client/session.py | 51 | class | Base class for implementations of TensorFlow client sessions. |
1541 | 1539 | _get_indexed_slices_value_from_fetches | tensorflow/tensorflow/python/client/session.py | 77 | function | |
1542 | 1540 | _get_feeds_for_indexed_slices | tensorflow/tensorflow/python/client/session.py | 83 | function | |
1543 | 1541 | _convert_to_numpy_obj | tensorflow/tensorflow/python/client/session.py | 139 | function | Explicitly convert obj based on numpy type except for string type. |
1544 | 1542 | register_session_run_conversion_functions | tensorflow/tensorflow/python/client/session.py | 144 | function | Register fetch and feed conversion functions for `tf.Session.run()`.
This function registers a triple of conversion functions for fetching and/or
feeding values of user-defined types in a call to tf.Session.run().
An example
```python
class SquaredTensor(object):
def __init__(self, tensor):
self.sq = tf.square(tensor)
#you can define conversion functions as follows:
fetch_function = lambda squared_tensor:([squared_tensor.sq],
lambda val: val[0])
feed_function = lambda feed, feed_val: [(feed.sq, feed_val)]
feed_function_for_partial_run = lambda feed: [feed.sq]
#then after invoking this register function, you can use as follows:
session.run(squared_tensor1,
feed_dict = {squared_tensor2 : some_numpy_array})
```
Args:
tensor_type: The type for which you want to register a conversion function.
fetch_function: A callable that takes an object of type `tensor_type` and
returns a tuple, where the first element is a list of `tf.Tensor` objects,
and the second element is a callable that takes a list of ndarrays and
returns an object of some value type that corresponds to `tensor_type`.
fetch_function describes how to expand fetch into its component Tensors
and how to contract the fetched results back into a single return value.
feed_function: A callable that takes feed_key and feed_value as input, and
returns a list of tuples (feed_tensor, feed_val), feed_key must have type
`tensor_type`, and feed_tensor must have type `tf.Tensor`. Each feed
function describes how to unpack a single fed value and map it to feeds of
one or more tensors and their corresponding values.
feed_function_for_partial_run: A callable for specifying tensor values to
feed when setting up a partial run, which takes a `tensor_type` type
object as input, and returns a list of Tensors.
Raises:
ValueError: If `tensor_type` has already been registered. |
1545 | 1543 | _is_attrs_instance | tensorflow/tensorflow/python/client/session.py | 199 | function | Returns True if the given obj is an instance of attrs-decorated class. |
1546 | 1544 | _get_attrs_values | tensorflow/tensorflow/python/client/session.py | 204 | function | Returns the list of values from an attrs instance. |
1547 | 1545 | _FetchMapper | tensorflow/tensorflow/python/client/session.py | 210 | class | Definition of the interface provided by fetch mappers.
Fetch mappers are utility classes used by the _FetchHandler to handle
arbitrary structures for the `fetch` argument to `Session.run()`.
The `fetch` argument can be of various shapes: single tensor or op, list of
fetches, tuple of fetches, namedtuple of fetches, or dict of fetches. The
structures can be arbitrarily nested.
The low level run() API only wants a list of tensor or op names. The various
`_FetchMapper` subclasses below take care of handling the different shapes:
uniquifying the fetches, and constructing results with the original shape. |
1548 | 1546 | _ElementFetchMapper | tensorflow/tensorflow/python/client/session.py | 282 | class | Fetch mapper for singleton tensors and ops. |
1549 | 1547 | _uniquify_fetches | tensorflow/tensorflow/python/client/session.py | 329 | function | Uniquifies fetches from a list of fetch_mappers.
This is a utility function used by _ListFetchMapper and _DictFetchMapper. It
gathers all the unique fetches from a list of mappers and builds a list
containing all of them but without duplicates (unique_fetches).
It also returns a 2-D list of integers (values_indices) indicating at which
index in unique_fetches the fetches of the mappers are located.
This list is as follows:
values_indices[mapper_index][mapper_fetch_index] = unique_fetches_index
Args:
fetch_mappers: list of fetch mappers.
Returns:
A list of fetches.
A 2-D list of integers. |
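The dedup-plus-index-map contract described for `_uniquify_fetches` can be sketched as follows. This is a simplified illustration that keys on fetch value (the real code must handle unhashable tensors, e.g. by keying on object identity), and `uniquify_fetches` is a hypothetical name:

```python
def uniquify_fetches(fetch_lists):
    # fetch_lists: one list of fetches per mapper. Returns the deduplicated
    # fetches plus, per mapper, the index of each of its fetches in that list.
    unique_fetches = []
    seen = {}  # fetch -> index in unique_fetches
    value_indices = []
    for fetches in fetch_lists:
        indices = []
        for f in fetches:
            if f not in seen:
                seen[f] = len(unique_fetches)
                unique_fetches.append(f)
            indices.append(seen[f])
        value_indices.append(indices)
    return unique_fetches, value_indices

unique, idx = uniquify_fetches([['a', 'b'], ['b', 'c', 'a']])
print(unique)  # ['a', 'b', 'c']
print(idx)     # [[0, 1], [1, 2, 0]]
```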
1550 | 1548 | _ListFetchMapper | tensorflow/tensorflow/python/client/session.py | 365 | class | Fetch mapper for lists, tuples, and namedtuples. |
1551 | 1549 | _DictFetchMapper | tensorflow/tensorflow/python/client/session.py | 399 | class | Fetch mapper for dicts. |
1552 | 1550 | _AttrsFetchMapper | tensorflow/tensorflow/python/client/session.py | 425 | class | Fetch mapper for attrs decorated classes. |
1553 | 1551 | _FetchHandler | tensorflow/tensorflow/python/client/session.py | 449 | class | Handler for structured fetches.
Given a graph, a user-provided structure for fetches, and a feed dict, this
class takes care of generating a list of tensor names to fetch and op names
to run for a low level `run()` call.
Given the results of the low level run call, this class can also rebuild a
result structure matching the user-provided structure for fetches, but
containing the corresponding results. |
1554 | 1552 | _name_list | tensorflow/tensorflow/python/client/session.py | 573 | function | Utility function for transitioning to the new session API.
Args:
tensor_list: a list of `Tensor`s.
Returns:
A list of each `Tensor`s name (as byte arrays). |
1555 | 1553 | _DeviceAttributes | tensorflow/tensorflow/python/client/session.py | 585 | class | Struct-like object describing a device's attributes.
Each device has 3 key properties:
- name: the fully-qualified TensorFlow path to the device. For
example: /job:worker/replica:0/task:3/device:CPU:0
- device_type: the type of the device (e.g. CPU, GPU, TPU, etc.)
- memory_limit_bytes: the maximum amount of memory available on the device
(in bytes). |
1556 | 1554 | BaseSession | tensorflow/tensorflow/python/client/session.py | 627 | class | A class for interacting with a TensorFlow computation.
The BaseSession enables incremental graph building with inline
execution of Operations and evaluation of Tensors. |
1557 | 1555 | Session | tensorflow/tensorflow/python/client/session.py | 1509 | class | A class for running TensorFlow operations.
A `Session` object encapsulates the environment in which `Operation`
objects are executed, and `Tensor` objects are evaluated. For
example:
```python
tf.compat.v1.disable_eager_execution() # need to disable eager in TF2.x
# Build a graph.
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
# Launch the graph in a session.
sess = tf.compat.v1.Session()
# Evaluate the tensor `c`.
print(sess.run(c)) # prints 30.0
```
A session may own resources, such as
`tf.Variable`, `tf.queue.QueueBase`,
and `tf.compat.v1.ReaderBase`. It is important to release
these resources when they are no longer required. To do this, either
invoke the `tf.Session.close` method on the session, or use
the session as a context manager. The following two examples are
equivalent:
```python
# Using the `close()` method.
sess = tf.compat.v1.Session()
sess.run(...)
sess.close()
# Using the context manager.
with tf.compat.v1.Session() as sess:
sess.run(...)
```
The
[`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto)
protocol buffer exposes various configuration options for a
session. For example, to create a session that uses soft constraints
for device placement, and log the resulting placement decisions,
create a session as follows:
```python
# Launch the graph in a session that allows soft device placement and
# logs the placement decisions.
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(
allow_soft_placement=True,
log_device_placement=True))
``` |
1558 | 1556 | InteractiveSession | tensorflow/tensorflow/python/client/session.py | 1679 | class | A TensorFlow `Session` for use in interactive contexts, such as a shell.
The only difference with a regular `Session` is that an `InteractiveSession`
installs itself as the default session on construction.
The methods `tf.Tensor.eval`
and `tf.Operation.run`
will use that session to run ops.
This is convenient in interactive shells and [IPython
notebooks](http://ipython.org), as it avoids having to pass an explicit
`Session` object to run ops.
For example:
```python
sess = tf.compat.v1.InteractiveSession()
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
# We can just use 'c.eval()' without passing 'sess'
print(c.eval())
sess.close()
```
Note that a regular session installs itself as the default session when it
is created in a `with` statement. The common usage in non-interactive
programs is to follow that pattern:
```python
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
with tf.compat.v1.Session():
# We can also use 'c.eval()' here.
print(c.eval())
``` |
1559 | 1557 | SessionBenchmark | tensorflow/tensorflow/python/client/session_benchmark.py | 36 | class | Tests and benchmarks for interacting with the `tf.compat.v1.Session`. |
1560 | 1558 | SessionClusterSpecPropagationTest | tensorflow/tensorflow/python/client/session_clusterspec_prop_test.py | 45 | class | |
1561 | 1559 | SessionListDevicesTest | tensorflow/tensorflow/python/client/session_list_devices_test.py | 33 | class | |
1562 | 1560 | PartialRunTest | tensorflow/tensorflow/python/client/session_partial_run_test.py | 35 | class | |
1563 | 1561 | SessionTest | tensorflow/tensorflow/python/client/session_test.py | 72 | class | |
1564 | 1562 | AllocationMaximum | tensorflow/tensorflow/python/client/timeline.py | 32 | class | Stores the maximum allocation for a given allocator within the timeline.
Parameters:
timestamp: `tensorflow::Env::NowMicros()` when this maximum was reached.
num_bytes: the total memory used at this time.
tensors: the set of tensors allocated at this time. |
1565 | 1563 | StepStatsAnalysis | tensorflow/tensorflow/python/client/timeline.py | 44 | class | Stores the step stats analysis output.
Parameters:
chrome_trace: A dict containing the chrome trace analysis.
allocator_maximums: A dict mapping allocator names to AllocationMaximum. |
1566 | 1564 | _ChromeTraceFormatter | tensorflow/tensorflow/python/client/timeline.py | 55 | class | A helper class for generating traces in Chrome Trace Format. |
1567 | 1565 | _TensorTracker | tensorflow/tensorflow/python/client/timeline.py | 265 | class | An internal class to track the lifetime of a Tensor. |
1568 | 1566 | Timeline | tensorflow/tensorflow/python/client/timeline.py | 346 | class | A class for visualizing execution timelines of TensorFlow steps. |
1569 | 1567 | TimelineTest | tensorflow/tensorflow/python/client/timeline_test.py | 34 | class | |
1570 | 1568 | VirtualGpuTestUtil | tensorflow/tensorflow/python/client/virtual_gpu_test.py | 38 | class | |
1571 | 1569 | VirtualGpuTest | tensorflow/tensorflow/python/client/virtual_gpu_test.py | 195 | class | |
1572 | 1570 | _date_to_date_number | tensorflow/tensorflow/python/compat/compat.py | 41 | function | |
1573 | 1571 | _update_forward_compatibility_date_number | tensorflow/tensorflow/python/compat/compat.py | 45 | function | Update the base date to compare in forward_compatible function. |
1574 | 1572 | forward_compatible | tensorflow/tensorflow/python/compat/compat.py | 70 | function | Return true if the forward compatibility window has expired.
See [Version
compatibility](https://tensorflow.org/guide/version_compat#backward_forward).
Forward-compatibility refers to scenarios where the producer of a TensorFlow
model (a GraphDef or SavedModel) is compiled against a version of the
TensorFlow library newer than what the consumer was compiled against. The
"producer" is typically a Python program that constructs and trains a model
while the "consumer" is typically another program that loads and serves the
model.
TensorFlow supports a 3-week forward-compatibility window for programs
compiled from source at HEAD.
For example, consider the case where a new operation `MyNewAwesomeAdd` is
created with the intent of replacing the implementation of an existing Python
wrapper - `tf.add`. The Python wrapper implementation should change from
something like:
```python
def add(inputs, name=None):
return gen_math_ops.add(inputs, name)
```
to:
```python
from tensorflow.python.compat import compat
def add(inputs, name=None):
if compat.forward_compatible(year, month, day):
# Can use the awesome new implementation.
return gen_math_ops.my_new_awesome_add(inputs, name)
# To maintain forward compatibility, use the old implementation.
return gen_math_ops.add(inputs, name)
```
Where `year`, `month`, and `day` specify the date beyond which binaries
that consume a model are expected to have been updated to include the
new operations. This date is typically at least 3 weeks beyond the date
the code that adds the new operation is committed.
Args:
year: A year (e.g., 2018). Must be an `int`.
month: A month (1 <= month <= 12) in year. Must be an `int`.
day: A day (1 <= day <= 31, depending on the month). Must be an
`int`.
Returns:
True if the caller can expect that serialized TensorFlow graphs produced
can be consumed by programs that are compiled with the TensorFlow library
source code after (year, month, day). |
1575 | 1573 | forward_compatibility_horizon | tensorflow/tensorflow/python/compat/compat.py | 131 | function | Context manager for testing forward compatibility of generated graphs.
See [Version
compatibility](https://tensorflow.org/guide/version_compat#backward_forward).
To ensure forward compatibility of generated graphs (see `forward_compatible`)
with older binaries, new features can be gated with:
```python
if compat.forward_compatible(year=2018, month=8, day=1):
generate_graph_with_new_features()
else:
generate_graph_so_older_binaries_can_consume_it()
```
However, when adding new features, one may want to unittest it before
the forward compatibility window expires. This context manager enables
such tests. For example:
```python
from tensorflow.python.compat import compat
def testMyNewFeature(self):
with compat.forward_compatibility_horizon(2018, 8, 2):
# Test that generate_graph_with_new_features() has an effect
```
Args:
year: A year (e.g., 2018). Must be an `int`.
month: A month (1 <= month <= 12) in year. Must be an `int`.
day: A day (1 <= day <= 31, depending on the month). Must be an
`int`.
Yields:
Nothing. |
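A context manager that temporarily overrides a module-level horizon and restores it on exit, as described above, can be sketched in plain Python. `_compat_date` is an assumed module-level variable and the tuple-based date comparison is an illustrative simplification, not TensorFlow's implementation:

```python
import contextlib

_compat_date = (2019, 8, 1)  # assumed module-level compatibility horizon

def forward_compatible(year, month, day):
    # Tuple comparison orders (year, month, day) chronologically.
    return _compat_date > (year, month, day)

@contextlib.contextmanager
def forward_compatibility_horizon(year, month, day):
    # Temporarily move the horizon; restore it even if the body raises.
    global _compat_date
    saved = _compat_date
    _compat_date = (year, month, day)
    try:
        yield
    finally:
        _compat_date = saved

print(forward_compatible(2020, 1, 1))      # False
with forward_compatibility_horizon(2020, 2, 1):
    print(forward_compatible(2020, 1, 1))  # True inside the context
print(forward_compatible(2020, 1, 1))      # False: horizon restored
```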
1576 | 1574 | CompatTest | tensorflow/tensorflow/python/compat/compat_test.py | 27 | class | |
1577 | 1575 | DisableV2BehaviorTest | tensorflow/tensorflow/python/compat/disable_v2_behavior_test.py | 27 | class | |
1578 | 1576 | enable_v2_behavior | tensorflow/tensorflow/python/compat/v2_compat.py | 43 | function | Enables TensorFlow 2.x behaviors.
This function can be called at the beginning of the program (before `Tensors`,
`Graphs` or other structures have been created, and before devices have been
initialized). It switches all global behaviors that are different between
TensorFlow 1.x and 2.x to behave as intended for 2.x.
This function is called in the main TensorFlow `__init__.py` file; users should
not need to call it, except during complex migrations. |
1579 | 1577 | disable_v2_behavior | tensorflow/tensorflow/python/compat/v2_compat.py | 82 | function | Disables TensorFlow 2.x behaviors.
This function can be called at the beginning of the program (before `Tensors`,
`Graphs` or other structures have been created, and before devices have been
initialized). It switches all global behaviors that are different between
TensorFlow 1.x and 2.x to behave as intended for 1.x.
Users can call this function to disable 2.x behavior during complex migrations. |
1580 | 1578 | convert_graph_def | tensorflow/tensorflow/python/compiler/mlir/mlir.py | 26 | function | Import a GraphDef and convert it to a textual MLIR module.
Args:
graph_def: An object of type graph_pb2.GraphDef or a textual proto
representation of a valid GraphDef.
pass_pipeline: A textual description of an MLIR Pass Pipeline to run on the
module, see MLIR documentation for the
[textual pass pipeline syntax](https://github.com/tensorflow/mlir/blob/master/g3doc/WritingAPass.md#textual-pass-pipeline-specification).
Returns:
A textual representation of the MLIR module corresponding to the graphdef.
Raises a RuntimeError on error. |
1581 | 1579 | MLIRImportTest | tensorflow/tensorflow/python/compiler/mlir/mlir_test.py | 26 | class | |
1582 | 1580 | _to_bytes | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 84 | function | Encode s if it is a sequence of chars. |
1583 | 1581 | _to_string | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 91 | function | Decode s if it is a sequence of bytes. |
1584 | 1582 | TrtPrecisionMode | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 98 | class | |
1585 | 1583 | TrtConversionParams | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 117 | class | Parameters that are used for TF-TRT conversion.
Fields:
rewriter_config_template: a template RewriterConfig proto used to create a
TRT-enabled RewriterConfig. If None, it will use a default one.
max_workspace_size_bytes: the maximum GPU temporary memory which the TRT
engine can use at execution time. This corresponds to the
'workspaceSize' parameter of nvinfer1::IBuilder::setMaxWorkspaceSize().
precision_mode: one of the strings in
TrtPrecisionMode.supported_precision_modes().
minimum_segment_size: the minimum number of nodes required for a subgraph
to be replaced by TRTEngineOp.
is_dynamic_op: whether to generate dynamic TRT ops which will build the
TRT network and engine at run time. Since TensorRT versions < 6.0 do not
support dynamic dimensions other than the batch dimension, this option must
be enabled when the TensorFlow graph has a non-batch dimension of dynamic
size. This option should be set to True in TF 2.0.
maximum_cached_engines: max number of cached TRT engines for dynamic TRT
ops. Created TRT engines for a dynamic dimension are cached. This is the
maximum number of engines that can be cached. If the number of cached
engines is already at max but none of them supports the input shapes,
the TRTEngineOp will fall back to run the original TF subgraph that
corresponds to the TRTEngineOp.
use_calibration: this argument is ignored if precision_mode is not INT8.
If set to True, a calibration graph will be created to calibrate the
missing ranges. The calibration graph must be converted to an inference
graph by running calibration with calibrate(). If set to False,
quantization nodes will be expected for every tensor in the graph
(excluding those which will be fused). If a range is missing, an error
will occur. Please note that accuracy may be negatively affected if
there is a mismatch between which tensors TRT quantizes and which
tensors were trained with fake quantization.
max_batch_size: max size for the input batch. This parameter is only
effective when is_dynamic_op=False which is not supported in TF 2.0.
allow_build_at_runtime: whether to build TensorRT engines during runtime.
If no TensorRT engine can be found in cache that can handle the given
inputs during runtime, then a new TensorRT engine is built at runtime if
allow_build_at_runtime=True, and otherwise native TF is used. This
argument is only effective if is_dynamic_op=True. |
1586 | 1584 | _check_conversion_params | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 188 | function | Validate the provided TrtConversionParams.
Args:
conversion_params: a TrtConversionParams instance.
is_v2: whether we're getting a RewriterConfig for TF 2.0.
Raises:
TypeError: if any of the parameters are of unexpected type.
ValueError: if any of the parameters are of unexpected value. |
1587 | 1585 | _check_trt_version_compatibility | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 252 | function | Check compatibility of TensorRT version.
Raises:
RuntimeError: if the TensorRT library version is incompatible. |
1588 | 1586 | get_tensorrt_rewriter_config | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 292 | function | Returns a RewriterConfig proto for TRT transformation.
Args:
conversion_params: a TrtConversionParams instance.
is_v2: whether we're getting a RewriterConfig for TF 2.0.
disable_non_trt_optimizers: Turn off all default Grappler optimizers.
Returns:
A RewriterConfig proto which sets a TensorRTOptimizer to run Grappler.
Raises:
TypeError: if any of the parameters are of unexpected type.
ValueError: if any of the parameters are of unexpected value. |
1589 | 1587 | _get_canonical_engine_name | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 383 | function | |
1590 | 1588 | is_explicit_batch_mode_enabled | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 387 | function | Checks whether explicit batch is enabled by the rewriter config. |
1591 | 1589 | TrtGraphConverter | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 398 | class | A converter for TF-TRT transformation for TF 1.x GraphDef/SavedModels.
To run the conversion without quantization calibration (e.g. for FP32/FP16
precision modes):
```python
converter = TrtGraphConverter(
input_saved_model_dir="my_dir",
precision_mode=TrtPrecisionMode.FP16)
converted_graph_def = converter.convert()
converter.save(output_saved_model_dir)
```
To run the conversion with quantization calibration:
```python
converter = TrtGraphConverter(
input_saved_model_dir="my_dir",
precision_mode=TrtPrecisionMode.INT8)
converter.convert()
# Run calibration 10 times.
converted_graph_def = converter.calibrate(
fetch_names=['output:0'],
num_runs=10,
feed_dict_fn=lambda: {'input:0': my_next_data()})
converter.save(output_saved_model_dir)
``` |
1592 | 1590 | _get_resource_handle | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 833 | function | |
1593 | 1591 | _TRTEngineResourceDeleter | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 838 | class | Resource deleter for destroying TRT engine cache resource. |
1594 | 1592 | _TRTEngineResource | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 853 | class | Class to track the serialized engines resource. |
1595 | 1593 | TrtGraphConverterV2 | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 880 | class | An offline converter for TF-TRT transformation for TF 2.0 SavedModels.
Currently this is not available on the Windows platform.
Note that in V2, is_dynamic_op=False is not supported, meaning TRT engines
will be built only when the corresponding TRTEngineOp is executed. But we
still provide a way to avoid the cost of building TRT engines during inference
(see more below).
There are several ways to run the conversion:
1. FP32/FP16 precision
```python
params = tf.experimental.tensorrt.ConversionParams(
precision_mode='FP16')
converter = tf.experimental.tensorrt.Converter(
input_saved_model_dir="my_dir", conversion_params=params)
converter.convert()
converter.save(output_saved_model_dir)
```
In this case, no TRT engines will be built or saved in the converted
SavedModel. But if input data is available during conversion, we can still
build and save the TRT engines to reduce the cost during inference (see
option 2 below).
2. FP32/FP16 precision with pre-built engines
```python
params = tf.experimental.tensorrt.ConversionParams(
precision_mode='FP16',
# Set this to a large enough number so it can cache all the engines.
maximum_cached_engines=16)
converter = tf.experimental.tensorrt.Converter(
input_saved_model_dir="my_dir", conversion_params=params)
converter.convert()
# Define a generator function that yields input data, and use it to execute
# the graph to build TRT engines.
# With TensorRT 5.1, different engines will be built (and saved later) for
# different input shapes to the TRTEngineOp.
def my_input_fn():
for _ in range(num_runs):
inp1, inp2 = ...
yield inp1, inp2
converter.build(input_fn=my_input_fn) # Generate corresponding TRT engines
converter.save(output_saved_model_dir) # Generated engines will be saved.
```
In this way, one engine will be built/saved for each unique input shapes of
the TRTEngineOp. This is good for applications that cannot afford building
engines during inference but have access to input data that is similar to
the one used in production (for example, that has the same input shapes).
Also, the generated TRT engines are platform dependent, so we need to run
`build()` in an environment that is similar to production (e.g. with the
same type of GPU).
3. INT8 precision and calibration with pre-built engines
```python
params = tf.experimental.tensorrt.ConversionParams(
precision_mode='INT8',
# Currently only one INT8 engine is supported in this mode.
maximum_cached_engines=1,
use_calibration=True)
converter = tf.experimental.tensorrt.Converter(
input_saved_model_dir="my_dir", conversion_params=params)
# Define a generator function that yields input data, and run INT8
# calibration with the data. All input data should have the same shape.
# At the end of convert(), the calibration stats (e.g. range information)
# will be saved and can be used to generate more TRT engines with different
# shapes. Also, one TRT engine will be generated (with the same shape as
# the calibration data) to be saved later.
def my_calibration_input_fn():
for _ in range(num_runs):
inp1, inp2 = ...
yield inp1, inp2
converter.convert(calibration_input_fn=my_calibration_input_fn)
# (Optional) Generate more TRT engines offline (same as the previous
# option), to avoid the cost of generating them during inference.
def my_input_fn():
for _ in range(num_runs):
inp1, inp2 = ...
yield inp1, inp2
converter.build(input_fn=my_input_fn)
# Save the converted SavedModel and the generated TRT engines.
converter.save(output_saved_model_dir)
``` |
1596 | 1594 | create_inference_graph | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 1270 | function | Python wrapper for the TRT transformation.
Args:
input_graph_def: a GraphDef object containing a model to be transformed. If
set to None, the graph will be read from the SavedModel loaded from
input_saved_model_dir.
outputs: list of tensors or node names for the model outputs. Only used when
input_graph_def is not None.
max_batch_size: max size for the input batch.
max_workspace_size_bytes: the maximum GPU temporary memory which the TRT
engine can use at execution time. This corresponds to the 'workspaceSize'
parameter of nvinfer1::IBuilder::setMaxWorkspaceSize().
precision_mode: one of TrtPrecisionMode.supported_precision_modes().
minimum_segment_size: the minimum number of nodes required for a subgraph to
be replaced by TRTEngineOp.
is_dynamic_op: whether to generate dynamic TRT ops which will build the TRT
network and engine at run time.
maximum_cached_engines: max number of cached TRT engines in dynamic TRT ops.
If the number of cached engines is already at max but none of them can
serve the input, the TRTEngineOp will fall back to run the TF function
based on which the TRTEngineOp is created.
input_saved_model_dir: the directory to load the SavedModel which contains
the input graph to transform. Used only when input_graph_def is None.
input_saved_model_tags: list of tags to load the SavedModel.
input_saved_model_signature_key: the key of the signature to optimize the
graph for.
output_saved_model_dir: if not None, construct a SavedModel using the
returned GraphDef and save it to the specified directory. This option only
works when the input graph is loaded from a SavedModel, i.e. when
input_saved_model_dir is specified and input_graph_def is None.
session_config: the ConfigProto used to create a Session. It's also used as
a template to create a TRT-enabled ConfigProto for conversion. If not
specified, a default ConfigProto will be used.
Returns:
A GraphDef transformed from input_graph_def (or the SavedModel graph def
loaded from input_saved_model_dir, if input_graph_def is not present), where
all TRT compatible subgraphs are replaced with TRTEngineOps, and a TF
function is added for each of the subgraphs.
If is_dynamic_op is True, each TRTEngineOp will contain a serialized
subgraph GraphDef, which will be converted to a TRT engine at execution time
and the TRT engine will be cached for future usage. A new TRT engine will be
created each time none of the cached engines match the input shapes. If
it fails to execute the TRT engine or the number of cached engines reaches
maximum_cached_engines, the op will fall back to call the corresponding TF
function.
If is_dynamic_op is False, each TRTEngineOp will contain a serialized TRT
engine created from the corresponding subgraph. No more engines will be
created on the fly, and the op will fall back to call the corresponding TF
function when it fails to execute the engine.
Raises:
ValueError: if the combination of the parameters is invalid. |
1597 | 1595 | TrtConvertTest | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert_test.py | 67 | class | Class to test Tensorflow-TensorRT integration python API. |
1598 | 1596 | TrtPrecisionMode | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert_windows.py | 31 | class | |
1599 | 1597 | TrtConversionParams | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert_windows.py | 43 | class | Parameters that are used for TF-TRT conversion.
Fields:
rewriter_config_template: a template RewriterConfig proto used to create a
TRT-enabled RewriterConfig. If None, it will use a default one.
max_workspace_size_bytes: the maximum GPU temporary memory which the TRT
engine can use at execution time. This corresponds to the
'workspaceSize' parameter of nvinfer1::IBuilder::setMaxWorkspaceSize().
precision_mode: one of the strings in
TrtPrecisionMode.supported_precision_modes().
minimum_segment_size: the minimum number of nodes required for a subgraph
to be replaced by TRTEngineOp.
is_dynamic_op: whether to generate dynamic TRT ops which will build the
TRT network and engine at run time. Since TensorRT versions < 6.0 do not
support dynamic dimensions other than the batch dimension, this option
needs to be enabled when the TensorFlow graph has a non-batch dimension
of dynamic size. This option should be set to True in TF 2.0.
maximum_cached_engines: max number of cached TRT engines for dynamic TRT
ops. Created TRT engines for a dynamic dimension are cached. This is the
maximum number of engines that can be cached. If the number of cached
engines is already at max but none of them supports the input shapes,
the TRTEngineOp will fall back to run the original TF subgraph that
corresponds to the TRTEngineOp.
use_calibration: this argument is ignored if precision_mode is not INT8.
If set to True, a calibration graph will be created to calibrate the
missing ranges. The calibration graph must be converted to an inference
graph by running calibration with calibrate(). If set to False,
quantization nodes will be expected for every tensor in the graph
(excluding those which will be fused). If a range is missing, an error
will occur. Please note that accuracy may be negatively affected if
there is a mismatch between which tensors TRT quantizes and which
tensors were trained with fake quantization.
max_batch_size: max size for the input batch. This parameter is only
effective when is_dynamic_op=False which is not supported in TF 2.0. |
1600 | 1598 | TrtConverterWindows | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert_windows.py | 97 | class | An offline converter for TF-TRT transformation for TF 2.0 SavedModels.
Currently this is not available on the Windows platform. |
1601 | 1599 | SimpleSingleEngineTest | tensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py | 34 | class | |
1602 | 1600 | SimpleMultiEnginesTest | tensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py | 74 | class | |
1603 | 1601 | SimpleMultiEnginesTest2 | tensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py | 134 | class | |
1604 | 1602 | ConstInputTest | tensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py | 175 | class | |
1605 | 1603 | ConstDataInputSingleEngineTest | tensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py | 211 | class | |
1606 | 1604 | ConstDataInputMultipleEnginesTest | tensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py | 232 | class | |
1607 | 1605 | ControlDependencyTest | tensorflow/tensorflow/python/compiler/tensorrt/test/base_test.py | 264 | class | |
1608 | 1606 | BatchMatMulTwoTensorTest | tensorflow/tensorflow/python/compiler/tensorrt/test/batch_matmul_test.py | 32 | class | Testing conversion of BatchMatMul where both inputs are tensors. |
1609 | 1607 | BatchMatMulWeightBroadcastTest | tensorflow/tensorflow/python/compiler/tensorrt/test/batch_matmul_test.py | 50 | class | Testing BatchMatMulV2: one operand is weight and both have same rank. |
1610 | 1608 | BatchMatMulWeightBroadcastDims2Test | tensorflow/tensorflow/python/compiler/tensorrt/test/batch_matmul_test.py | 69 | class | Testing BatchMatMulV2: weight operand must be broadcasted. |
1611 | 1609 | BiasaddMatMulTest | tensorflow/tensorflow/python/compiler/tensorrt/test/biasadd_matmul_test.py | 33 | class | Testing conversion of BiasAdd MatMul in TF-TRT conversion. |
1612 | 1610 | BinaryTensorWeightBroadcastTest | tensorflow/tensorflow/python/compiler/tensorrt/test/binary_tensor_weight_broadcast_test.py | 30 | class | Tests for scale & elementwise layers in TF-TRT. |
1613 | 1611 | CastInt32ToFp32Test | tensorflow/tensorflow/python/compiler/tensorrt/test/cast_test.py | 31 | class | Tests that casts to FP32 are split in FP16 mode. |
1614 | 1612 | CombinedNmsTest | tensorflow/tensorflow/python/compiler/tensorrt/test/combined_nms_test.py | 30 | class | Test for CombinedNMS op in TF-TRT. |
1615 | 1613 | ConcatenationTest | tensorflow/tensorflow/python/compiler/tensorrt/test/concatenation_test.py | 32 | class | Testing Concatenation in TF-TRT conversion. |
1616 | 1614 | ConstBroadcastTest | tensorflow/tensorflow/python/compiler/tensorrt/test/const_broadcast_test.py | 28 | class | Test for Constant broadcasting in TF-TRT. |
1617 | 1615 | conv2d_layer | tensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py | 32 | function | |
1618 | 1616 | div_round_up | tensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py | 62 | function | |
1619 | 1617 | build_graph | tensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py | 66 | function | |
1620 | 1618 | Conv2DNCHWTest | tensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py | 83 | class | Testing conversion of Conv2D (data_format=NCHW) in TF-TRT conversion. |
1621 | 1619 | Conv2DNHWCTest | tensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py | 118 | class | Testing conversion of Conv2D (data_format=NHWC) in TF-TRT conversion. |
1622 | 1620 | Conv2DStridedNCHWTest | tensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py | 141 | class | Testing conversion of strided Conv2D (data_format=NCHW). |
1623 | 1621 | Conv2DTranposeTest | tensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py | 172 | class | Testing conversion of conv2d_transpose (AKA Conv2DBackpropInput) |
1624 | 1622 | DynamicInputShapesTest | tensorflow/tensorflow/python/compiler/tensorrt/test/dynamic_input_shapes_test.py | 32 | class | |
1625 | 1623 | IdentityTest | tensorflow/tensorflow/python/compiler/tensorrt/test/identity_output_test.py | 36 | class | Testing engine with the same tensor repeated as output via identity. |
1626 | 1624 | ExcludeUnsupportedInt32Test | tensorflow/tensorflow/python/compiler/tensorrt/test/int32_test.py | 32 | class | Test exclusion of ops which are not supported in INT32 mode by TF-TRT |
1627 | 1625 | CalibrationInt32Support | tensorflow/tensorflow/python/compiler/tensorrt/test/int32_test.py | 68 | class | Test execution of calibration with int32 input |
1628 | 1626 | LRUCacheTest | tensorflow/tensorflow/python/compiler/tensorrt/test/lru_cache_test.py | 33 | class | |
1629 | 1627 | MemoryAlignmentTest | tensorflow/tensorflow/python/compiler/tensorrt/test/memory_alignment_test.py | 31 | class | Testing conversion of BatchMatMul in TF-TRT conversion. |
1630 | 1628 | MultiConnectionNeighborEngineTest | tensorflow/tensorflow/python/compiler/tensorrt/test/multi_connection_neighbor_engine_test.py | 31 | class | Test for multi connection neighboring nodes wiring tests in TF-TRT. |
1631 | 1629 | NeighboringEngineTest | tensorflow/tensorflow/python/compiler/tensorrt/test/neighboring_engine_test.py | 32 | class | Neighboring node wiring tests in TF-TRT conversion. |
1632 | 1630 | QuantizationAwareTrainingMNISTTest | tensorflow/tensorflow/python/compiler/tensorrt/test/quantization_mnist_test.py | 59 | class | Testing usage of quantization ranges inserted in graph. |
1633 | 1631 | _GraphFn | tensorflow/tensorflow/python/compiler/tensorrt/test/quantization_test.py | 33 | function | |
1634 | 1632 | _GetParams | tensorflow/tensorflow/python/compiler/tensorrt/test/quantization_test.py | 53 | function | |
1635 | 1633 | QuantizationMissingAllRangesTest | tensorflow/tensorflow/python/compiler/tensorrt/test/quantization_test.py | 57 | class | Create a graph containing single segment with no quantization ranges. |
1636 | 1634 | QuantizationWithRangesTest | tensorflow/tensorflow/python/compiler/tensorrt/test/quantization_test.py | 82 | class | Create a graph containing single segment with quantization ranges. |
1637 | 1635 | NonQuantizedPrecisionsWithRangesTest | tensorflow/tensorflow/python/compiler/tensorrt/test/quantization_test.py | 110 | class | Create a graph containing single segment with quantization ranges. |
1638 | 1636 | RankTwoTest | tensorflow/tensorflow/python/compiler/tensorrt/test/rank_two_test.py | 30 | class | Test for rank 2 input in TF-TRT. |
1639 | 1637 | ReshapeTest | tensorflow/tensorflow/python/compiler/tensorrt/test/reshape_transpose_test.py | 28 | class | |
1640 | 1638 | TransposeTest | tensorflow/tensorflow/python/compiler/tensorrt/test/reshape_transpose_test.py | 79 | class | |
1641 | 1639 | IncompatibleTransposeTest | tensorflow/tensorflow/python/compiler/tensorrt/test/reshape_transpose_test.py | 108 | class | |
1642 | 1640 | IsQuantizationMode | tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py | 95 | function | |
1643 | 1641 | IsQuantizationWithCalibration | tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py | 99 | function | |
1644 | 1642 | GraphState | tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py | 103 | class | |
1645 | 1643 | TfTrtIntegrationTestBase | tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py | 109 | class | Class to test Tensorflow-TensorRT integration. |
1646 | 1644 | _GetTestConfigsV1 | tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py | 883 | function | Returns the config combinations to run the test. |
1647 | 1645 | _GetTestConfigsV2 | tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py | 902 | function | Returns the config combinations to run the test. |
1648 | 1646 | _GetTest | tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py | 928 | function | Gets a single test method based on the parameters. |
1649 | 1647 | _AddTestsFor | tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py | 942 | function | Adds test methods to TfTrtIntegrationTestBase for specific TF version. |
1650 | 1648 | _AddTests | tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py | 967 | function | Adds test methods to TfTrtIntegrationTestBase. |
1651 | 1649 | TopKTest | tensorflow/tensorflow/python/compiler/tensorrt/test/topk_test.py | 29 | class | Testing Top-K in TF-TRT conversion. |
1652 | 1650 | TopKOutputTypeTest | tensorflow/tensorflow/python/compiler/tensorrt/test/topk_test.py | 50 | class | Testing that output type of engine using Top-K is set correctly. |
1653 | 1651 | TrtModeTestBase | tensorflow/tensorflow/python/compiler/tensorrt/test/trt_mode_test.py | 31 | class | Test squeeze on batch dim and some unary operations in TF-TRT. |
1654 | 1652 | ImplicitBatchTest | tensorflow/tensorflow/python/compiler/tensorrt/test/trt_mode_test.py | 81 | class | |
1655 | 1653 | ExplicitBatchTest | tensorflow/tensorflow/python/compiler/tensorrt/test/trt_mode_test.py | 104 | class | |
1656 | 1654 | DynamicShapesTest | tensorflow/tensorflow/python/compiler/tensorrt/test/trt_mode_test.py | 140 | class | Test with dynamic input shapes.
DynamicShapesTest is different from ExplicitBatchTest in that it uses input
and output masks to change the input and output shapes to unknown shapes. |
1657 | 1655 | UnaryTest | tensorflow/tensorflow/python/compiler/tensorrt/test/unary_test.py | 33 | class | Test for unary operations in TF-TRT. |
1658 | 1656 | VGGBlockNCHWTest | tensorflow/tensorflow/python/compiler/tensorrt/test/vgg_block_nchw_test.py | 35 | class | Single vgg layer in NCHW unit tests in TF-TRT. |
1659 | 1657 | VGGBlockTest | tensorflow/tensorflow/python/compiler/tensorrt/test/vgg_block_test.py | 35 | class | Single vgg layer test in TF-TRT conversion. |
1660 | 1658 | GetGraph | tensorflow/tensorflow/python/compiler/tensorrt/test/testdata/gen_tftrt_model.py | 49 | function | Define graph. |
1661 | 1659 | GenerateModelV2 | tensorflow/tensorflow/python/compiler/tensorrt/test/testdata/gen_tftrt_model.py | 59 | function | Generate and convert a model using TFv2 API. |
1662 | 1660 | GenerateModelV1 | tensorflow/tensorflow/python/compiler/tensorrt/test/testdata/gen_tftrt_model.py | 90 | function | Generate and convert a model using TFv1 API. |
1663 | 1661 | ExperimentalCompileTest | tensorflow/tensorflow/python/compiler/xla/experimental_compile_test.py | 30 | class | |
1664 | 1662 | _XlaScope | tensorflow/tensorflow/python/compiler/xla/jit.py | 32 | class | Keeps track of previous XLA scope calls, and depth of current call. |
1665 | 1663 | experimental_jit_scope | tensorflow/tensorflow/python/compiler/xla/jit.py | 42 | function | Enable or disable JIT compilation of operators within the scope.
NOTE: This is an experimental feature.
The compilation is a hint and only supported on a best-effort basis.
Example usage:
```python
with tf.xla.experimental.jit_scope():
c = tf.matmul(a, b) # compiled
with tf.xla.experimental.jit_scope(compile_ops=False):
d = tf.matmul(a, c) # not compiled
with tf.xla.experimental.jit_scope(
compile_ops=lambda node_def: 'matmul' in node_def.op.lower()):
e = tf.matmul(a, b) + d # matmul is compiled, the addition is not.
```
Example of `separate_compiled_gradients`:
```python
# In the example below, the computations for f, g and h will all be compiled
# in separate scopes.
with tf.xla.experimental.jit_scope(
separate_compiled_gradients=True):
f = tf.matmul(a, b)
g = tf.gradients([f], [a, b], name='mygrads1')
h = tf.gradients([f], [a, b], name='mygrads2')
```
Args:
compile_ops: Whether to enable or disable compilation in the scope.
Either a Python bool, or a callable that accepts the parameter
`node_def` and returns a python bool.
separate_compiled_gradients: If true put each gradient subgraph into a
separate compilation scope. This gives fine-grained control over which
portions of the graph will be compiled as a single unit. Compiling
gradients separately may yield better performance for some graphs.
The scope is named based on the scope of the forward computation as well
as the name of the gradients. As a result, the gradients will be compiled
in a scope that is separate from both the forward computation, and from
other gradients.
Raises:
RuntimeError: if called when eager execution is enabled.
Yields:
The current scope, enabling or disabling compilation. |
1666 | 1664 | enable_jit_nonstateful | tensorflow/tensorflow/python/compiler/xla/jit_test.py | 39 | function | |
1667 | 1665 | JITTest | tensorflow/tensorflow/python/compiler/xla/jit_test.py | 47 | class | |
1668 | 1666 | CompilationEnabledInGradientTest | tensorflow/tensorflow/python/compiler/xla/jit_test.py | 187 | class | |
1669 | 1667 | compile | tensorflow/tensorflow/python/compiler/xla/xla.py | 67 | function | Builds an operator that compiles and runs `computation` with XLA.
NOTE: In eager mode, `computation` will have `@tf.function` semantics.
Args:
computation: A Python function that builds a computation to apply to the
input. If the function takes n inputs, 'inputs' should be a list of n
tensors.
`computation` may return a list of operations and tensors. Tensors must
come before operations in the returned list. The return value of
`compile` is a list of tensors corresponding to the tensors from the
output of `computation`.
All `Operation`s returned from `computation` will be executed when
evaluating any of the returned output tensors.
inputs: A list of inputs or `None` (equivalent to an empty list). Each input
can be a nested structure containing values that are convertible to
tensors. Note that passing an N-dimension list of compatible values will
result in an N-dimension list of scalar tensors rather than a single Rank-N
tensor. If you need different behavior, convert part of inputs to tensors
with `tf.convert_to_tensor`.
Returns:
Same data structure as if computation(*inputs) is called directly with some
exceptions for correctness. Exceptions include:
1) None output: a NoOp would be returned which control-depends on
computation.
2) Single value output: A tuple containing the value would be returned.
3) Operation-only outputs: a NoOp would be returned which
control-depends on computation.
TODO(b/121383831): Investigate into removing these special cases.
Raises:
RuntimeError: if called when eager execution is enabled.
Known issues:
When a tf.random operation is built with XLA, the implementation doesn't
pass the user provided seed to the XLA compiler. As such, the XLA compiler
generates a random number and uses it as a seed when compiling the
operation. This implementation causes a violation of the Tensorflow
defined semantics in two aspects. First, changing the value of the user
defined seed doesn't change the numbers generated by the operation.
Second, when a seed is not specified, running the program multiple times
will generate the same numbers. |
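The three special cases in the return contract of `compile` can be mimicked with a small pure-Python normalizer. This is an illustrative sketch only (the function name `normalize_outputs` is invented; the real logic lives inside the TF implementation):

```python
# Sketch of xla.compile's output normalization described above:
# 1) None output -> a "NoOp" placeholder;
# 2) single value output -> a 1-tuple containing the value;
# 3) operation-only outputs -> a "NoOp" placeholder.
def normalize_outputs(outputs, is_operation=lambda x: False):
    if outputs is None:
        return "NoOp"
    if not isinstance(outputs, (list, tuple)):
        return (outputs,)  # single value -> tuple
    if outputs and all(is_operation(o) for o in outputs):
        return "NoOp"      # operations only -> NoOp
    return tuple(outputs)
```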
1670 | 1668 | XLACompileContext | tensorflow/tensorflow/python/compiler/xla/xla.py | 125 | class | A `ControlFlowContext` for nodes inside an XLA computation cluster.
THIS IS ONLY FOR TENSORFLOW INTERNAL IMPLEMENTATION, DO NOT USE DIRECTLY.
The primary role of `XLACompileContext` is to mark operators inside a
xla.compile() computation with attribute "_xla_compile_id=XYZ", where XYZ is
a unique name.
`ControlFlowContext` is used to perform the annotation since it integrates
with Tensorflow constructs like ResourceVariables. For example, if a
`ResourceVariable` is constructed inside a xla.compile() block, the
`ResourceVariable` implementation can use
`with ops.control_dependencies(None)` to build the variable's definition
outside the compiled computation. |
1671 | 1669 | _compile_internal | tensorflow/tensorflow/python/compiler/xla/xla.py | 306 | function | Builds graph operators that compiles and symbolically executes computation.
Args:
computation: A Python function that builds the computation to compile and
execute.
inputs: A list of inputs or `None` (equivalent to an empty list). Each input
can be a nested structure containing values that are convertible to
tensors. Note that passing an N-dimension list of compatible values will
result in an N-dimension list of scalar tensors rather than a single Rank-N
tensor. If you need different behavior, convert part of inputs to tensors
with `tf.convert_to_tensor`.
Returns:
Same data structure as if computation(*inputs) is called directly with some
exceptions for correctness. Exceptions include: 1) None output 2) Single
value output 3) Operation-only outputs
Raises:
ValueError: If any element in computation outputs is neither an operation
nor a value that can be converted to a tensor.
ValueError: If computation outputs is non-flat and contains any Operations.
TypeError: If `inputs` is not a list or tuple. |
1672 | 1670 | is_flat | tensorflow/tensorflow/python/compiler/xla/xla.py | 409 | function | Checks if outputs is a flat structure.
Following structures and values are considered flat:
1) None
2) A single object
3) A list or tuple of Tensors/Operations
The only structures that this function understands are sequences,
dictionaries and types defined using the attrs library. E.g. this means
that if outputs contains a single user-defined Object, it is considered to
be flat. Errors are raised later on if that Object cannot be converted to a
Tensor.
Args:
outputs: Output from `computation` inside `xla.compile`.
Returns:
A boolean indicating whether outputs is flat. |
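The flatness rule described above can be approximated in a few lines of plain Python. This is a simplified sketch under stated assumptions (it ignores namedtuples and attrs-defined types, which the real `is_flat` also understands):

```python
# Simplified flatness check: outputs is flat iff it is None, a single
# non-dict object, or a one-level list/tuple whose elements are not
# themselves sequences or dictionaries.
def is_flat(outputs):
    if isinstance(outputs, (list, tuple)):
        return not any(
            isinstance(o, (list, tuple, dict)) for o in outputs)
    return not isinstance(outputs, dict)
```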
1673 | 1671 | _postprocess_flat_outputs | tensorflow/tensorflow/python/compiler/xla/xla.py | 451 | function | Validates flat outputs and adds back device assignments.
Args:
outputs: Output from `computation` inside `xla.compile`.
Returns:
Tensors and Operations extracted from outputs. |
1674 | 1672 | _postprocess_non_flat_outputs | tensorflow/tensorflow/python/compiler/xla/xla.py | 503 | function | Validates non-flat outputs and adds back device assignments.
Args:
outputs: Output from `computation` inside `xla.compile`.
Returns:
Tensors extracted from outputs and an empty list because Operations are not
allowed in non-flat outputs. |
1675 | 1673 | _disable_summary_context | tensorflow/tensorflow/python/compiler/xla/xla.py | 539 | function | Enters a context where all summary ops are skipped.
Summaries are not yet supported in xla.compile(). So we provide this context
manager that can skip creating summary ops. This is a temporary workaround due
to XLA not supporting summary ops.
Yields:
None. |
1676 | 1674 | _CapturedObject | tensorflow/tensorflow/python/compiler/xla/xla.py | 558 | class | A placeholder to capture an object. |
1677 | 1675 | _get_scaffold | tensorflow/tensorflow/python/compiler/xla/xla.py | 576 | function | Retrieves the Scaffold from `captured_scaffold_fn`. |
1678 | 1676 | check_function_argument_count | tensorflow/tensorflow/python/compiler/xla/xla.py | 591 | function | Validate the number of input arguments to an XLA function.
Args:
func: the Python function that will be called to generate the body of an XLA
computation graph.
input_arity: the number of explicit arguments supplied by the caller.
infeed_queue: if not None, the infeed queue that will supply
additional arguments to the function.
Returns:
None if function can be called with the supplied number of
arguments, or an error string if it cannot. |
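The arity contract described above (return None if the call would succeed, else an error string) can be sketched with the standard `inspect` module. A hedged toy version, not the TF source (the `has_infeed_queue` flag stands in for the real `infeed_queue` parameter):

```python
import inspect

# Returns None if `func` can accept `input_arity` positional arguments
# (+1 when an infeed queue supplies an extra one), else an error string.
def check_function_argument_count(func, input_arity, has_infeed_queue=False):
    supplied = input_arity + (1 if has_infeed_queue else 0)
    params = list(inspect.signature(func).parameters.values())
    if any(p.kind == inspect.Parameter.VAR_POSITIONAL for p in params):
        return None  # *args absorbs any surplus arguments
    positional = [p for p in params
                  if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)]
    required = sum(1 for p in positional if p.default is p.empty)
    if supplied < required:
        return "missing a required argument"
    if supplied > len(positional):
        return "too many arguments"
    return None
```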
1679 | 1677 | XLACompileContextTest | tensorflow/tensorflow/python/compiler/xla/xla_test.py | 47 | class | |
1680 | 1678 | XlaCompileTest | tensorflow/tensorflow/python/compiler/xla/xla_test.py | 217 | class | |
1681 | 1679 | CheckFunctionArgumentCountTest | tensorflow/tensorflow/python/compiler/xla/xla_test.py | 260 | class | |
1682 | 1680 | BatchBenchmark | tensorflow/tensorflow/python/data/benchmarks/batch_benchmark.py | 27 | class | Benchmarks for `tf.data.Dataset.batch()`. |
1683 | 1681 | DatasetBenchmarkBase | tensorflow/tensorflow/python/data/benchmarks/benchmark_base.py | 31 | class | Base class for dataset benchmarks. |
1684 | 1682 | FilterBenchmark | tensorflow/tensorflow/python/data/benchmarks/filter_benchmark.py | 26 | class | Benchmarks for `tf.data.Dataset.filter()`. |
1685 | 1683 | SingleThreadedFlatMapDataset | tensorflow/tensorflow/python/data/benchmarks/from_tensor_slices_benchmark.py | 30 | class | A `Dataset` that maps a function over its input and flattens the result. |
1686 | 1684 | FromTensorSlicesBenchmark | tensorflow/tensorflow/python/data/benchmarks/from_tensor_slices_benchmark.py | 62 | class | Benchmarks for `tf.data.Dataset.from_tensor_slices()`. |
1687 | 1685 | ListFilesBenchmark | tensorflow/tensorflow/python/data/benchmarks/list_files_benchmark.py | 35 | class | Benchmarks for `tf.data.Dataset.list_files()`. |
1688 | 1686 | MapBenchmark | tensorflow/tensorflow/python/data/benchmarks/map_benchmark.py | 32 | class | Benchmarks for `tf.data.Dataset.map()`. |
1689 | 1687 | MetaBenchmark | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 31 | class | Benchmark that compares various ways of running tf.data benchmarks. |
1690 | 1688 | PrefetchBenchmark | tensorflow/tensorflow/python/data/benchmarks/prefetch_benchmark.py | 24 | class | Benchmarks for `tf.data.Dataset.prefetch()`. |
1691 | 1689 | RangeBenchmark | tensorflow/tensorflow/python/data/benchmarks/range_benchmark.py | 24 | class | Benchmarks for `tf.data.Dataset.range()`. |
1692 | 1690 | AutotuneBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/autotune_benchmark.py | 31 | class | Benchmarks for autotuning performance knobs. |
1693 | 1691 | ChooseFastestBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/choose_fastest_benchmark.py | 31 | class | Benchmarks for static optimizations. |
1694 | 1692 | ChooseFastestBranchBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/choose_fastest_branch_benchmark.py | 26 | class | Benchmarks for ChooseFastestBranchDataset. |
1695 | 1693 | CsvDatasetBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/csv_dataset_benchmark.py | 38 | class | Benchmarks for `tf.data.experimental.CsvDataset`. |
1696 | 1694 | MapAndBatchBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/map_and_batch_benchmark.py | 40 | class | Benchmarks for `tf.data.experimental.map_and_batch()`. |
1697 | 1695 | MapDefunBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/map_defun_benchmark.py | 34 | class | Benchmarks for MapDefunOp. |
1698 | 1696 | _generate_csv_test_case | tensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py | 37 | function | Generates a `decode_csv()` test case. |
1699 | 1697 | _generate_parse_single_example_test_case | tensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py | 57 | function | Generates a `parse_single_example()` test case. |
1700 | 1698 | MapVectorizationBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py | 97 | class | Benchmarks for the `MapVectorization` optimization. |
1701 | 1699 | MatchingFilesBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/matching_files_benchmark.py | 35 | class | Benchmark for the experimental `MatchingFilesDataset`. |
1702 | 1700 | OptimizationBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/optimize_benchmark.py | 32 | class | Benchmarks for static optimizations. |
1703 | 1701 | _make_fake_dataset_fn | tensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py | 36 | function | Returns a dataset that emulates a remote storage data source.
Returns a dataset factory which creates a dataset with 100 elements that
emulates the performance characteristic of a file-based dataset stored in a
remote storage. In particular, the first element will take an order of
magnitude longer to produce than the remaining elements (100ms vs. 1ms).
Args:
initial_delay_us: How long to wait before producing the first element.
remainder_delay_us: How long to wait before producing subsequent elements. |
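The performance characteristic described above (first element an order of magnitude slower than the rest) can be emulated with a trivial generator. A toy sketch, not the benchmark's dataset factory (the name `fake_remote_source` is invented):

```python
import time

# Yields n elements; the first waits initial_delay_s, the rest wait
# remainder_delay_s, emulating a cold read from remote storage.
def fake_remote_source(initial_delay_s=0.1, remainder_delay_s=0.001, n=100):
    for i in range(n):
        time.sleep(initial_delay_s if i == 0 else remainder_delay_s)
        yield i
```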
1704 | 1702 | ParallelInterleaveBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py | 68 | class | Benchmarks for `tf.data.experimental.parallel_interleave()`. |
1705 | 1703 | _time_resampling | tensorflow/tensorflow/python/data/experimental/benchmarks/rejection_resample_benchmark.py | 31 | function | |
1706 | 1704 | RejectionResampleBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/rejection_resample_benchmark.py | 56 | class | Benchmarks for `tf.data.experimental.rejection_resample()`. |
1707 | 1705 | SnapshotDatasetBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/snapshot_dataset_benchmark.py | 34 | class | Benchmarks for `tf.data.experimental.snapshot()`. |
1708 | 1706 | UnbatchBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/unbatch_benchmark.py | 32 | class | Benchmarks for `tf.data.Dataset.unbatch()`. |
1709 | 1707 | AssertCardinalityTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/assert_cardinality_test.py | 30 | class | Tests for `tf.data.experimental.assert_cardinality()`. |
1710 | 1708 | AssertNextTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/assert_next_test.py | 30 | class | |
1711 | 1709 | chunk | tensorflow/tensorflow/python/data/experimental/kernel_tests/auto_shard_dataset_test.py | 46 | function | |
1712 | 1710 | AutoShardDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/auto_shard_dataset_test.py | 51 | class | |
1713 | 1711 | AutoShardTextLineDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/auto_shard_dataset_test.py | 509 | class | |
1714 | 1712 | _element_length_fn | tensorflow/tensorflow/python/data/experimental/kernel_tests/bucket_by_sequence_length_test.py | 37 | function | |
1715 | 1713 | _to_sparse_tensor | tensorflow/tensorflow/python/data/experimental/kernel_tests/bucket_by_sequence_length_test.py | 42 | function | |
1716 | 1714 | _format_record | tensorflow/tensorflow/python/data/experimental/kernel_tests/bucket_by_sequence_length_test.py | 46 | function | |
1717 | 1715 | _get_record_type | tensorflow/tensorflow/python/data/experimental/kernel_tests/bucket_by_sequence_length_test.py | 56 | function | |
1718 | 1716 | _get_record_shape | tensorflow/tensorflow/python/data/experimental/kernel_tests/bucket_by_sequence_length_test.py | 66 | function | |
1719 | 1717 | BucketBySequenceLengthTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/bucket_by_sequence_length_test.py | 76 | class | |
1720 | 1718 | _test_objects | tensorflow/tensorflow/python/data/experimental/kernel_tests/compression_ops_test.py | 31 | function | |
1721 | 1719 | CompressionOpsTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/compression_ops_test.py | 53 | class | |
1722 | 1720 | CopyToDeviceTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/copy_to_device_test.py | 40 | class | |
1723 | 1721 | CounterTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/counter_test.py | 30 | class | |
1724 | 1722 | CsvDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/csv_dataset_test.py | 40 | class | |
1725 | 1723 | _make_scalar_ds | tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py | 38 | function | Create a test dataset with scalar elements. |
1726 | 1724 | _make_vector_ds | tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py | 43 | function | Create a test dataset with vector elements (of varying size). |
1727 | 1725 | _make_matrix_ds1 | tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py | 48 | function | Create a test dataset with matrix elements (of varying size). |
1728 | 1726 | _make_matrix_ds2 | tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py | 53 | function | Create a test dataset with matrix elements (of varying size). |
1729 | 1727 | _make_matrix_ds_fully_defined | tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py | 58 | function | Create a test dataset with matrix elements (of varying size). |
1730 | 1728 | _make_5dtensor_ds | tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py | 63 | function | Create a test dataset with matrix elements (of varying size). |
1731 | 1729 | _make_ragged_ds | tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py | 69 | function | Create a test dataset with RaggedTensor elements (of varying size). |
1732 | 1730 | _make_dict_ds | tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py | 76 | function | Create a test set with various element shapes. |
1733 | 1731 | _make_tuple_ds | tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py | 89 | function | Create a test set with various element shapes. |
1734 | 1732 | _to_list | tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py | 98 | function | |
1735 | 1733 | RaggedBatchTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_ragged_batch_test.py | 102 | class | |
1736 | 1734 | DenseToSparseBatchTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/dense_to_sparse_batch_test.py | 32 | class | |
1737 | 1735 | DirectedInterleaveDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/directed_interleave_dataset_test.py | 34 | class | |
1738 | 1736 | GetSingleElementTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/get_single_element_test.py | 34 | class | |
1739 | 1737 | GroupByReducerTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/group_by_reducer_test.py | 37 | class | |
1740 | 1738 | GroupByWindowTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/group_by_window_test.py | 41 | class | |
1741 | 1739 | IgnoreErrorsTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/ignore_errors_test.py | 40 | class | |
1742 | 1740 | IOTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/io_test.py | 32 | class | |
1743 | 1741 | MakeBatchedFeaturesDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/make_batched_features_dataset_test.py | 38 | class | |
1744 | 1742 | MakeCsvDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/make_csv_dataset_test.py | 38 | class | |
1745 | 1743 | MakeTFRecordDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/make_tf_record_dataset_test.py | 33 | class | |
1746 | 1744 | MapAndBatchTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/map_and_batch_test.py | 43 | class | |
1747 | 1745 | _test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/map_defun_op_test.py | 44 | function | |
1748 | 1746 | MapDefunTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/map_defun_op_test.py | 48 | class | |
1749 | 1747 | MatchingFilesDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/matching_files_test.py | 34 | class | |
1750 | 1748 | ModelDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/model_dataset_test.py | 30 | class | |
1751 | 1749 | NonSerializableTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/non_serializable_test.py | 29 | class | |
1752 | 1750 | _captured_refvar_test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimize_dataset_test.py | 44 | function | |
1753 | 1751 | OptimizeDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimize_dataset_test.py | 106 | class | |
1754 | 1752 | OverrideThreadpoolTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/override_threadpool_test.py | 38 | class | |
1755 | 1753 | ParallelInterleaveTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/parallel_interleave_test.py | 42 | class | |
1756 | 1754 | ParseExampleDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/parse_example_dataset_test.py | 54 | class | |
1757 | 1755 | PrefetchToDeviceTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/prefetch_to_device_test.py | 37 | class | |
1758 | 1756 | PrefetchWithSlackTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/prefetch_with_slack_test.py | 33 | class | |
1759 | 1757 | RandomDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/random_dataset_test.py | 29 | class | |
1760 | 1758 | FixedLengthRecordDatasetTestBase | tensorflow/tensorflow/python/data/experimental/kernel_tests/reader_dataset_ops_test_base.py | 36 | class | Base class for setting up and testing FixedLengthRecordDataset. |
1761 | 1759 | MakeBatchedFeaturesDatasetTestBase | tensorflow/tensorflow/python/data/experimental/kernel_tests/reader_dataset_ops_test_base.py | 63 | class | Base class for setting up and testing `make_batched_features_dataset`. |
1762 | 1760 | TextLineDatasetTestBase | tensorflow/tensorflow/python/data/experimental/kernel_tests/reader_dataset_ops_test_base.py | 271 | class | Base class for setting up and testing TextLineDataset. |
1763 | 1761 | TFRecordDatasetTestBase | tensorflow/tensorflow/python/data/experimental/kernel_tests/reader_dataset_ops_test_base.py | 311 | class | Base class for setting up and testing TFRecordDataset. |
1764 | 1762 | BatchSizesForWorkerTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/rebatch_dataset_test.py | 35 | class | |
1765 | 1763 | _flat_shapes | tensorflow/tensorflow/python/data/experimental/kernel_tests/rebatch_dataset_test.py | 113 | function | |
1766 | 1764 | RebatchDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/rebatch_dataset_test.py | 120 | class | |
1767 | 1765 | LegacyRebatchDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/rebatch_dataset_test.py | 325 | class | |
1768 | 1766 | ComputeBatchSizeTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/rebatch_dataset_test.py | 500 | class | |
1769 | 1767 | RejectionResampleTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/rejection_resample_test.py | 36 | class | |
1770 | 1768 | LocalReplicateTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/replicate_test.py | 44 | class | |
1771 | 1769 | _get_server_def | tensorflow/tensorflow/python/data/experimental/kernel_tests/replicate_test.py | 225 | function | Returns a server def with a single job + multiple tasks. |
1772 | 1770 | EagerClusterReplicateTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/replicate_test.py | 245 | class | |
1773 | 1771 | GraphClusterReplicateTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/replicate_test.py | 327 | class | |
1774 | 1772 | ScanTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/scan_test.py | 46 | class | |
1775 | 1773 | ShuffleAndRepeatTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/shuffle_and_repeat_test.py | 32 | class | |
1776 | 1774 | SleepTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/sleep_test.py | 32 | class | |
1777 | 1775 | SnapshotDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/snapshot_test.py | 40 | class | |
1778 | 1776 | LegacySnapshotDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/snapshot_test.py | 318 | class | |
1779 | 1777 | SqlDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/sql_dataset_test.py | 32 | class | |
1780 | 1778 | SqlDatasetTestBase | tensorflow/tensorflow/python/data/experimental/kernel_tests/sql_dataset_test_base.py | 30 | class | Base class for setting up and testing SqlDataset. |
1781 | 1779 | StatsDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/stats_dataset_ops_test.py | 39 | class | |
1782 | 1780 | ThreadUtilizationStatsTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/stats_dataset_ops_test.py | 334 | class | |
1783 | 1781 | FeatureStatsDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/stats_dataset_ops_test.py | 399 | class | |
1784 | 1782 | StatsDatasetTestBase | tensorflow/tensorflow/python/data/experimental/kernel_tests/stats_dataset_test_base.py | 37 | class | Base class for testing statistics gathered in `StatsAggregator`. |
1785 | 1783 | _events_from_file | tensorflow/tensorflow/python/data/experimental/kernel_tests/stats_dataset_test_base.py | 311 | function | Returns all events in a single event file.
Args:
filepath: Path to the event file.
Returns:
A list of all tf.Event protos in the event file. |
1786 | 1784 | _events_from_logdir | tensorflow/tensorflow/python/data/experimental/kernel_tests/stats_dataset_test_base.py | 329 | function | Returns all events in the single eventfile in logdir.
Args:
logdir: The directory in which the single event file is sought.
Returns:
A list of all tf.Event protos from the single event file.
Raises:
AssertionError: If logdir does not contain exactly one file. |
1787 | 1785 | TakeWhileTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/take_while_test.py | 34 | class | |
1788 | 1786 | TFRecordWriterTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/tf_record_writer_test.py | 39 | class | |
1789 | 1787 | UniqueTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/unique_test.py | 32 | class | |
1790 | 1788 | VariantTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/variant_test.py | 28 | class | |
1791 | 1789 | WrapDatasetVariantTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/wrap_unwrap_test.py | 31 | class | |
1792 | 1790 | ChooseFastestBranchDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/choose_fastest_branch_dataset_test.py | 34 | class | |
1793 | 1791 | ChooseFastestDatasetTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/choose_fastest_dataset_test.py | 31 | class | |
1794 | 1792 | _test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/filter_fusion_test.py | 34 | function | |
1795 | 1793 | FilterFusionTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/filter_fusion_test.py | 62 | class | |
1796 | 1794 | FilterWithRandomUniformFusionTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/filter_with_random_uniform_fusion_test.py | 30 | class | |
1797 | 1795 | GrapplerTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/grappler_test.py | 37 | class | |
1798 | 1796 | _test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/hoist_random_uniform_test.py | 38 | function | |
1799 | 1797 | HoistRandomUniformTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/hoist_random_uniform_test.py | 68 | class | |
1800 | 1798 | InjectPrefetchTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/inject_prefetch_test.py | 29 | class | |
1801 | 1799 | LatencyAllEdgesTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/latency_all_edges_test.py | 31 | class | |
1802 | 1800 | MapAndBatchFusionTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_and_batch_fusion_test.py | 29 | class | |
1803 | 1801 | _test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_and_filter_fusion_test.py | 34 | function | |
1804 | 1802 | MapAndFilterFusionTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_and_filter_fusion_test.py | 77 | class | |
1805 | 1803 | _test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_fusion_test.py | 34 | function | |
1806 | 1804 | MapFusionTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_fusion_test.py | 66 | class | |
1807 | 1805 | _test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_parallelization_test.py | 37 | function | |
1808 | 1806 | MapParallelizationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_parallelization_test.py | 58 | class | |
1809 | 1807 | _generate_test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py | 51 | function | |
1810 | 1808 | _unary_bitwise_test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py | 60 | function | |
1811 | 1809 | _unary_logical_test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py | 65 | function | |
1812 | 1810 | _unary_complex_test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py | 70 | function | |
1813 | 1811 | _unary_real_test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py | 81 | function | |
1814 | 1812 | _binary_bitwise_test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py | 135 | function | |
1815 | 1813 | _binary_logical_test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py | 144 | function | |
1816 | 1814 | _binary_real_test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py | 150 | function | |
1817 | 1815 | MapVectorizationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/map_vectorization_test.py | 192 | class | |
1818 | 1816 | _test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/noop_elimination_test.py | 34 | function | |
1819 | 1817 | NoopEliminationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/noop_elimination_test.py | 90 | class | |
1820 | 1818 | ReorderDataDiscardingOpsTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/reorder_data_discarding_ops_test.py | 29 | class | |
1821 | 1819 | ShuffleAndRepeatFusionTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/optimization/shuffle_and_repeat_fusion_test.py | 30 | class | |
1822 | 1820 | AssertCardinalityDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/assert_cardinality_dataset_serialization_test.py | 30 | class | |
1823 | 1821 | AutoShardDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/auto_shard_dataset_serialization_test.py | 36 | class | |
1824 | 1822 | BatchDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/batch_dataset_serialization_test.py | 33 | class | |
1825 | 1823 | CacheDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/cache_dataset_serialization_test.py | 32 | class | |
1826 | 1824 | _test_combinations | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/checkpoint_input_pipeline_hook_test.py | 40 | function | |
1827 | 1825 | CheckpointInputPipelineHookTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/checkpoint_input_pipeline_hook_test.py | 44 | class | |
1828 | 1826 | ChooseFastestBranchDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/choose_fastest_branch_dataset_serialization_test.py | 33 | class | |
1829 | 1827 | ChooseFastestDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/choose_fastest_dataset_serialization_test.py | 30 | class | |
1830 | 1828 | ConcatenateDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/concatenate_dataset_serialization_test.py | 30 | class | |
1831 | 1829 | CsvDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/csv_dataset_serialization_test.py | 32 | class | |
1832 | 1830 | FromTensorsSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/dataset_constructor_serialization_test.py | 31 | class | |
1833 | 1831 | FromTensorSlicesSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/dataset_constructor_serialization_test.py | 49 | class | |
1834 | 1832 | FromSparseTensorSlicesSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/dataset_constructor_serialization_test.py | 71 | class | |
1835 | 1833 | remove_variants | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/dataset_serialization_test_base.py | 41 | function | Remove variants from a nest structure, so sess.run will execute. |
1836 | 1834 | DatasetSerializationTestBase | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/dataset_serialization_test_base.py | 55 | class | Base class for testing serializable datasets. |
1837 | 1835 | FilterDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/filter_dataset_serialization_test.py | 31 | class | |
1838 | 1836 | FixedLengthRecordDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/fixed_length_record_dataset_serialization_test.py | 30 | class | |
1839 | 1837 | FlatMapDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/flat_map_dataset_serialization_test.py | 38 | class | |
1840 | 1838 | GroupByReducerSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/group_by_reducer_serialization_test.py | 31 | class | |
1841 | 1839 | GroupByWindowSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/group_by_window_serialization_test.py | 31 | class | |
1842 | 1840 | IgnoreErrorsSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/ignore_errors_serialization_test.py | 31 | class | |
1843 | 1841 | InterleaveDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/interleave_dataset_serialization_test.py | 32 | class | |
1844 | 1842 | MapAndBatchDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/map_and_batch_dataset_serialization_test.py | 34 | class | |
1845 | 1843 | MapDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/map_dataset_serialization_test.py | 38 | class | |
1846 | 1844 | MatchingFilesDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/matching_files_dataset_serialization_test.py | 33 | class | |
1847 | 1845 | OptimizeDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/optimize_dataset_serialization_test.py | 30 | class | |
1848 | 1846 | PaddedBatchDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/padded_batch_dataset_serialization_test.py | 32 | class | |
1849 | 1847 | ParallelInterleaveDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/parallel_interleave_dataset_serialization_test.py | 33 | class | |
1850 | 1848 | ParallelMapDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/parallel_map_dataset_serialization_test.py | 37 | class | |
1851 | 1849 | ParseExampleDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/parse_example_dataset_serialization_test.py | 29 | class | |
1852 | 1850 | PrefetchDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/prefetch_dataset_serialization_test.py | 29 | class | |
1853 | 1851 | RangeDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/range_dataset_serialization_test.py | 38 | class | |
1854 | 1852 | LegacyRebatchDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/rebatch_dataset_serialization_test.py | 30 | class | |
1855 | 1853 | RebatchDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/rebatch_dataset_serialization_test.py | 46 | class | |
1856 | 1854 | SampleFromDatasetsSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/sample_from_datasets_serialization_test.py | 30 | class | |
1857 | 1855 | ScanDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/scan_dataset_serialization_test.py | 30 | class | |
1858 | 1856 | SkipDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/sequence_dataset_serialization_test.py | 30 | class | |
1859 | 1857 | TakeDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/sequence_dataset_serialization_test.py | 61 | class | |
1860 | 1858 | RepeatDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/sequence_dataset_serialization_test.py | 91 | class | |
1861 | 1859 | SerializationIntegrationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/serialization_integration_test.py | 33 | class | |
1862 | 1860 | ShardDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/shard_dataset_serialization_test.py | 29 | class | |
1863 | 1861 | ShuffleAndRepeatSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/shuffle_and_repeat_dataset_serialization_test.py | 30 | class | |
1864 | 1862 | ShuffleDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/shuffle_dataset_serialization_test.py | 32 | class | |
1865 | 1863 | SnapshotDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/snapshot_dataset_serialization_test.py | 33 | class | |
1866 | 1864 | LegacySnapshotDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/snapshot_dataset_serialization_test.py | 124 | class | |
1867 | 1865 | SqlDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/sql_dataset_serialization_test.py | 34 | class | |
1868 | 1866 | StatsDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/stats_dataset_serialization_test.py | 36 | class | |
1869 | 1867 | TakeWhileDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/take_while_dataset_serialization_test.py | 30 | class | |
1870 | 1868 | TextLineDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/textline_dataset_serialization_test.py | 30 | class | |
1871 | 1869 | TFRecordDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/tf_record_dataset_serialization_test.py | 34 | class | |
1872 | 1870 | UnbatchDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/unbatch_dataset_serialization_test.py | 30 | class | |
1873 | 1871 | UniqueDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/unique_dataset_serialization_test.py | 30 | class | |
1874 | 1872 | ZipDatasetSerializationTest | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/zip_dataset_serialization_test.py | 30 | class | |
1875 | 1873 | dense_to_ragged_batch | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 36 | function | A transformation that batches ragged elements into `tf.RaggedTensor`s.
This transformation combines multiple consecutive elements of the input
dataset into a single element.
Like `tf.data.Dataset.batch`, the components of the resulting element will
have an additional outer dimension, which will be `batch_size` (or
`N % batch_size` for the last element if `batch_size` does not divide the
number of input elements `N` evenly and `drop_remainder` is `False`). If
your program depends on the batches having the same outer dimension, you
should set the `drop_remainder` argument to `True` to prevent the smaller
batch from being produced.
Unlike `tf.data.Dataset.batch`, the input elements to be batched may have
different shapes:
* If an input element is a `tf.Tensor` whose static `tf.TensorShape` is
fully defined, then it is batched as normal.
* If an input element is a `tf.Tensor` whose static `tf.TensorShape` contains
one or more axes with unknown size (i.e., `shape[i]=None`), then the output
will contain a `tf.RaggedTensor` that is ragged up to any of such
dimensions.
* If an input element is a `tf.RaggedTensor` or any other type, then it is
batched as normal.
Example:
>>> dataset = tf.data.Dataset.from_tensor_slices(np.arange(6))
>>> dataset = dataset.map(lambda x: tf.range(x))
>>> dataset.element_spec.shape
TensorShape([None])
>>> dataset = dataset.apply(
... tf.data.experimental.dense_to_ragged_batch(batch_size=2))
>>> for batch in dataset:
... print(batch)
<tf.RaggedTensor [[], [0]]>
<tf.RaggedTensor [[0, 1], [0, 1, 2]]>
<tf.RaggedTensor [[0, 1, 2, 3], [0, 1, 2, 3, 4]]>
Args:
batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
consecutive elements of this dataset to combine in a single batch.
drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing
whether the last batch should be dropped in the case it has fewer than
`batch_size` elements; the default behavior is not to drop the smaller
batch.
row_splits_dtype: The dtype that should be used for the `row_splits` of any
new ragged tensors. Existing `tf.RaggedTensor` elements do not have their
row_splits dtype changed.
Returns:
Dataset: A `Dataset`. |
1876 | 1874 | dense_to_sparse_batch | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 102 | function | A transformation that batches ragged elements into `tf.sparse.SparseTensor`s.
Like `Dataset.padded_batch()`, this transformation combines multiple
consecutive elements of the dataset, which might have different
shapes, into a single element. The resulting element has three
components (`indices`, `values`, and `dense_shape`), which
comprise a `tf.sparse.SparseTensor` that represents the same data. The
`row_shape` represents the dense shape of each row in the
resulting `tf.sparse.SparseTensor`, to which the effective batch size is
prepended. For example:
```python
# NOTE: The following examples use `{ ... }` to represent the
# contents of a dataset.
a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] }
a.apply(tf.data.experimental.dense_to_sparse_batch(
batch_size=2, row_shape=[6])) ==
{
([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]], # indices
['a', 'b', 'c', 'a', 'b'], # values
[2, 6]), # dense_shape
([[0, 0], [0, 1], [0, 2], [0, 3]],
['a', 'b', 'c', 'd'],
[1, 6])
}
```
Args:
batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
consecutive elements of this dataset to combine in a single batch.
row_shape: A `tf.TensorShape` or `tf.int64` vector tensor-like object
representing the equivalent dense shape of a row in the resulting
`tf.sparse.SparseTensor`. Each element of this dataset must have the same
rank as `row_shape`, and must have size less than or equal to `row_shape`
in each dimension.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1877 | 1875 | map_and_batch_with_legacy_function | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 153 | function | Fused implementation of `map` and `batch`.
NOTE: This is an escape hatch for existing uses of `map_and_batch` that do not
work with V2 functions. New uses are strongly discouraged and existing uses
should migrate to `map_and_batch` as this method will be removed in V2.
Args:
map_func: A function mapping a nested structure of tensors to another
nested structure of tensors.
batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
consecutive elements of this dataset to combine in a single batch.
num_parallel_batches: (Optional.) A `tf.int64` scalar `tf.Tensor`,
representing the number of batches to create in parallel. On one hand,
higher values can help mitigate the effect of stragglers. On the other
hand, higher values can increase contention if CPU is scarce.
drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing
whether the last batch should be dropped in case its size is smaller than
desired; the default behavior is not to drop the smaller batch.
num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`,
representing the number of elements to process in parallel. If not
specified, `batch_size * num_parallel_batches` elements will be processed
in parallel. If the value `tf.data.experimental.AUTOTUNE` is used, then
the number of parallel calls is set dynamically based on available CPU.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`.
Raises:
ValueError: If both `num_parallel_batches` and `num_parallel_calls` are
specified. |
1878 | 1876 | map_and_batch | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 213 | function | Fused implementation of `map` and `batch`.
Maps `map_func` across `batch_size` consecutive elements of this dataset
and then combines them into a batch. Functionally, it is equivalent to `map`
followed by `batch`. This API is temporary and deprecated since input pipeline
optimization now fuses consecutive `map` and `batch` operations automatically.
Args:
map_func: A function mapping a nested structure of tensors to another
nested structure of tensors.
batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
consecutive elements of this dataset to combine in a single batch.
num_parallel_batches: (Optional.) A `tf.int64` scalar `tf.Tensor`,
representing the number of batches to create in parallel. On one hand,
higher values can help mitigate the effect of stragglers. On the other
hand, higher values can increase contention if CPU is scarce.
drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing
whether the last batch should be dropped in case its size is smaller than
desired; the default behavior is not to drop the smaller batch.
num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`,
representing the number of elements to process in parallel. If not
specified, `batch_size * num_parallel_batches` elements will be processed
in parallel. If the value `tf.data.experimental.AUTOTUNE` is used, then
the number of parallel calls is set dynamically based on available CPU.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`.
Raises:
ValueError: If both `num_parallel_batches` and `num_parallel_calls` are
specified. |
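The fused map-and-batch semantics described above can be illustrated with a minimal pure-Python sketch over plain lists. This is illustration only, not the TensorFlow implementation (which operates on dataset graphs and runs map calls in parallel); the name `map_and_batch_sketch` is hypothetical:

```python
def map_and_batch_sketch(elements, map_func, batch_size, drop_remainder=False):
    """Illustrates `map` followed by `batch` over a plain list."""
    mapped = [map_func(x) for x in elements]
    batches = [mapped[i:i + batch_size]
               for i in range(0, len(mapped), batch_size)]
    # drop_remainder discards a final batch smaller than batch_size.
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches
```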
1879 | 1877 | unbatch | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 269 | function | Splits elements of a dataset into multiple elements on the batch dimension.
For example, if elements of the dataset are shaped `[B, a0, a1, ...]`,
where `B` may vary for each input element, then for each element in the
dataset, the unbatched dataset will contain `B` consecutive elements
of shape `[a0, a1, ...]`.
```python
# NOTE: The following example uses `{ ... }` to represent the contents
# of a dataset.
a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] }
a.unbatch() == {
'a', 'b', 'c', 'a', 'b', 'a', 'b', 'c', 'd'}
```
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
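The unbatching behavior shown in the example above reduces, in a pure-Python model, to flattening one level of nesting. A minimal sketch (the name is hypothetical; the real transformation works on the batch dimension of tensors):

```python
def unbatch_sketch(batches):
    """Splits each batch into its consecutive elements (pure-Python model)."""
    return [element for batch in batches for element in batch]
```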
1880 | 1878 | _DenseToSparseBatchDataset | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 297 | class | A `Dataset` that batches ragged dense elements into `tf.sparse.SparseTensor`s. |
1881 | 1879 | _MapAndBatchDataset | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 327 | class | A `Dataset` that maps a function over a batch of elements. |
1882 | 1880 | _DenseToRaggedDataset | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 380 | class | A `Dataset` that encodes dense inputs as ragged (w/ ragged_rank=0).
In particular:
* Any tf.Tensor elements with rank>0 are encoded as ragged tensors with
ragged_rank=0. This allows tensors with varying shape to be batched
together.
* Any other elements are left as-is. |
1883 | 1881 | cardinality | tensorflow/tensorflow/python/data/experimental/ops/cardinality.py | 38 | function | Returns the cardinality of `dataset`, if known.
The operation returns the cardinality of `dataset`. The operation may return
`tf.data.experimental.INFINITE_CARDINALITY` if `dataset` contains an infinite
number of elements or `tf.data.experimental.UNKNOWN_CARDINALITY` if the
analysis fails to determine the number of elements in `dataset` (e.g. when the
dataset source is a file).
>>> dataset = tf.data.Dataset.range(42)
>>> print(tf.data.experimental.cardinality(dataset).numpy())
42
>>> dataset = dataset.repeat()
>>> cardinality = tf.data.experimental.cardinality(dataset)
>>> print((cardinality == tf.data.experimental.INFINITE_CARDINALITY).numpy())
True
>>> dataset = dataset.filter(lambda x: True)
>>> cardinality = tf.data.experimental.cardinality(dataset)
>>> print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy())
True
Args:
dataset: A `tf.data.Dataset` for which to determine cardinality.
Returns:
A scalar `tf.int64` `Tensor` representing the cardinality of `dataset`. If
the cardinality is infinite or unknown, the operation returns the named
constants `INFINITE_CARDINALITY` or `UNKNOWN_CARDINALITY`, respectively. |
1884 | 1882 | assert_cardinality | tensorflow/tensorflow/python/data/experimental/ops/cardinality.py | 72 | function | Asserts the cardinality of the input dataset.
NOTE: The following assumes that "examples.tfrecord" contains 42 records.
>>> dataset = tf.data.TFRecordDataset("examples.tfrecord")
>>> cardinality = tf.data.experimental.cardinality(dataset)
>>> print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy())
True
>>> dataset = dataset.apply(tf.data.experimental.assert_cardinality(42))
>>> print(tf.data.experimental.cardinality(dataset).numpy())
42
Args:
expected_cardinality: The expected cardinality of the input dataset.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`.
Raises:
FailedPreconditionError: The assertion is checked at runtime (when iterating
the dataset) and an error is raised if the actual and expected cardinality
differ. |
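The runtime nature of the check above can be sketched in pure Python with a generator that counts elements as they are consumed, raising only while iterating. This is an illustration of the semantics, not the library implementation, and the name is hypothetical:

```python
def assert_cardinality_sketch(elements, expected_cardinality):
    """Yields elements unchanged; raises lazily if the count differs."""
    count = 0
    for element in elements:
        count += 1
        if count > expected_cardinality:
            raise RuntimeError(
                "Input has more than %d elements" % expected_cardinality)
        yield element
    if count < expected_cardinality:
        raise RuntimeError(
            "Input has only %d of %d expected elements"
            % (count, expected_cardinality))
```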
1885 | 1883 | _AssertCardinalityDataset | tensorflow/tensorflow/python/data/experimental/ops/cardinality.py | 103 | class | A `Dataset` that asserts the cardinality of its input. |
1886 | 1884 | compress | tensorflow/tensorflow/python/data/experimental/ops/compression_ops.py | 24 | function | Compress a dataset element.
Args:
element: A nested structure of types supported by Tensorflow.
Returns:
A variant tensor representing the compressed element. This variant can be
passed to `uncompress` to get back the original element. |
1887 | 1885 | uncompress | tensorflow/tensorflow/python/data/experimental/ops/compression_ops.py | 39 | function | Uncompress a compressed dataset element.
Args:
element: A scalar variant tensor to uncompress. The element should have been
created by calling `compress`.
output_spec: A nested structure of `tf.TypeSpec` representing the type(s) of
the uncompressed element.
Returns:
The uncompressed element. |
1888 | 1886 | CounterV2 | tensorflow/tensorflow/python/data/experimental/ops/counter.py | 29 | function | Creates a `Dataset` that counts from `start` in steps of size `step`.
For example:
```python
Dataset.count() == [0, 1, 2, ...)
Dataset.count(2) == [2, 3, ...)
Dataset.count(2, 5) == [2, 7, 12, ...)
Dataset.count(0, -1) == [0, -1, -2, ...)
Dataset.count(10, -1) == [10, 9, ...)
```
Args:
start: (Optional.) The starting value for the counter. Defaults to 0.
step: (Optional.) The step size for the counter. Defaults to 1.
dtype: (Optional.) The data type for counter elements. Defaults to
`tf.int64`.
Returns:
A `Dataset` of scalar `dtype` elements. |
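The counting behavior in the examples above corresponds to Python's `itertools.count`. A minimal sketch (name hypothetical; the real `Dataset` produces `tf.int64` tensors rather than Python ints):

```python
import itertools

def counter_sketch(start=0, step=1):
    """An endless counter, analogous to the Counter dataset (pure Python)."""
    return itertools.count(start, step)
```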
1889 | 1887 | CounterV1 | tensorflow/tensorflow/python/data/experimental/ops/counter.py | 59 | function | |
1890 | 1888 | ProcessingMode | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 36 | class | |
1891 | 1889 | _DataServiceDatasetV2 | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 49 | class | A `Dataset` that reads elements from the tf.data service. |
1892 | 1890 | _DataServiceDatasetV1 | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 124 | class | A `Dataset` that executes its input through the tf.data service. |
1893 | 1891 | _parse_service | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 148 | function | Parses a tf.data service string into a (protocol, address) tuple.
Args:
service: A string in the format "protocol://address".
Returns:
The parsed (protocol, address) tuple. |
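Parsing a "protocol://address" service string can be sketched as a single split on the separator. This is a minimal illustration of the described behavior, not the library's code; the name and error message are hypothetical:

```python
def parse_service_sketch(service):
    """Splits a "protocol://address" string into its two components."""
    if "://" not in service:
        raise ValueError(
            "Expected a string of the form protocol://address, got: %s"
            % service)
    protocol, address = service.split("://", 1)
    return protocol, address
```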
1894 | 1892 | _from_dataset_id | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 175 | function | Creates a dataset which reads data from the tf.data service.
This transformation is similar to `from_dataset_id`, but supports additional
parameters which we do not yet want to add to the public Python API.
Args:
processing_mode: A string specifying the policy for how data should be
processed by tf.data workers. Currently, the only supported value is
"parallel_epochs".
service: A string indicating how to connect to the tf.data service. The
string should be in the format "<protocol>://<address>", e.g.
"grpc://localhost:5000".
dataset_id: The id of the dataset to read from. This id is returned by
`register_dataset` when the dataset is registered with the tf.data
service.
element_spec: A nested structure of `tf.TypeSpec`s representing the type of
elements produced by the dataset. Use `tf.data.Dataset.element_spec` to
see the element spec for a given dataset.
job_name: (Optional.) The name of the job. This argument makes it possible
for multiple datasets to share the same job. The default behavior is that
the dataset creates anonymous, exclusively owned jobs.
max_outstanding_requests: (Optional.) A limit on how many elements may be
requested at the same time. You can use this option to control the amount
of memory used, since `distribute` won't use more than `element_size` *
`max_outstanding_requests` of memory.
task_refresh_interval_hint_ms: (Optional.) A hint for how often to query the
dispatcher for task changes.
Returns:
A `tf.data.Dataset` which reads from the tf.data service. |
1895 | 1893 | _distribute | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 248 | function | A transformation that moves dataset processing to the tf.data service.
This transformation is similar to `distribute`, but supports additional
parameters which we do not yet want to add to the public Python API.
Args:
processing_mode: A string specifying the policy for how data should be
processed by tf.data workers. Currently, the only supported value is
"parallel_epochs".
service: A string indicating how to connect to the tf.data service. The
string should be in the format "<protocol>://<address>", e.g.
"grpc://localhost:5000".
job_name: (Optional.) The name of the job. This argument makes it possible
for multiple datasets to share the same job. The default behavior is that
the dataset creates anonymous, exclusively owned jobs.
max_outstanding_requests: (Optional.) A limit on how many elements may be
requested at the same time. You can use this option to control the amount
of memory used, since `distribute` won't use more than `element_size` *
`max_outstanding_requests` of memory.
task_refresh_interval_hint_ms: (Optional.) A hint for how often to query the
dispatcher for task changes.
Returns:
Dataset: A `Dataset` of the elements produced by the data service. |
1896 | 1894 | distribute | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 295 | function | A transformation that moves dataset processing to the tf.data service.
When you iterate over a dataset containing the `distribute` transformation,
the tf.data service creates a "job" which produces data for the dataset
iteration.
The `processing_mode` argument controls what data is produced by a tf.data
service job. Currently, the only supported mode is "parallel_epochs".
processing_mode="parallel_epochs" means that multiple tf.data workers will
iterate through the dataset in parallel, each producing all elements of the
dataset. For example, if the dataset contains {0, 1, 2}, every tf.data worker
used for execution will produce {0, 1, 2}. If there are 3 workers, the job
will produce the elements {0, 0, 0, 1, 1, 1, 2, 2, 2} (though not necessarily
in that order). To account for this, it is recommended to randomly shuffle
your dataset, so that different tf.data workers will iterate through the
dataset in different orders.
In the future, there will be additional processing modes. For example,
a "one_epoch" mode which partitions the dataset across the tf.data
workers, so that the consumers see each element of the dataset only once.
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x*x)
dataset = dataset.apply(
tf.data.experimental.service.distribute("parallel_epochs",
"grpc://dataservice:5000"))
dataset = dataset.map(lambda x: x+1)
for element in dataset:
print(element) # prints { 1, 2, 5, 10, 17 }
```
In the above example, the first two lines (before the call to `distribute`)
will be executed on tf.data workers, and the elements provided over
RPC. The remaining transformations (after the call to `distribute`) will be
executed locally.
The `job_name` argument allows jobs to be shared across multiple
datasets. Instead of each dataset creating its own job, all
datasets with the same `job_name` will consume from the same job. A new job
will be created for each iteration of the dataset (with each repetition of
`Dataset.repeat` counting as a new iteration). Suppose two training workers
(in either a single client or multi-client setup) iterate over the below
dataset, and there is a single tf.data worker:
```
range5_dataset = tf.data.Dataset.range(5)
dataset = range5_dataset.apply(tf.data.experimental.service.distribute(
"parallel_epochs", "grpc://dataservice:5000", job_name="my_job_name"))
for iteration in range(3):
print(list(dataset))
```
The elements of each job will be split between the two processes, with
elements being consumed by the processes on a first-come first-served basis.
One possible result is that process 1 prints
```
[0, 2, 4]
[0, 1, 3]
[1]
```
and process 2 prints
```
[1, 3]
[2, 4]
[0, 2, 3, 4]
```
Job names must not be re-used across different training jobs within the
lifetime of the tf.data service. In general, the tf.data service is expected
to live for the duration of a single training job.
To use the tf.data service with multiple training jobs, make sure to use
different job names to avoid conflicts. For example, suppose a training job
calls `distribute` with `job_name="job"` and reads until end of input. If
another independent job connects to the same tf.data service and tries to read
from `job_name="job"`, it will immediately receive end of input, without
getting any data.
**Keras and Distribution Strategies**
The dataset produced by the `distribute` transformation can be passed to
Keras' `Model.fit` or Distribution Strategy's
`tf.distribute.Strategy.experimental_distribute_dataset` like any other
`tf.data.Dataset`. We recommend setting a `job_name` on the call to
`distribute` so that if there are multiple workers, they read data from the
same job. Note that the autosharding normally performed by
`experimental_distribute_dataset` will be disabled when setting a `job_name`,
since sharing the job already results in splitting data across the workers.
When using a shared job, data will be dynamically balanced across workers, so
that they reach end of input about the same time. This results in better
worker utilization than with autosharding, where each worker processes an
independent set of files, and some workers may run out of data earlier than
others.
Args:
processing_mode: A string specifying the policy for how data should be
processed by tf.data workers. Currently, the only supported value is
"parallel_epochs".
service: A string indicating how to connect to the tf.data service. The
string should be in the format "protocol://address", e.g.
"grpc://localhost:5000".
job_name: (Optional.) The name of the job. This argument makes it possible
for multiple datasets to share the same job. The default behavior is that
the dataset creates anonymous, exclusively owned jobs.
max_outstanding_requests: (Optional.) A limit on how many elements may be
requested at the same time. You can use this option to control the amount
of memory used, since `distribute` won't use more than `element_size` *
`max_outstanding_requests` of memory.
Returns:
Dataset: A `Dataset` of the elements produced by the data service. |
1897 | 1895 | register_dataset | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 424 | function | Registers a dataset with the tf.data service.
`register_dataset` registers a dataset with the tf.data service so that
datasets can be created later with
`tf.data.experimental.service.from_dataset_id`. This is useful when the
dataset is registered by one process, then used in another process. When the
same
process is both registering and reading from the dataset, it is simpler to use
`tf.data.experimental.service.distribute` instead.
If the dataset is already registered with the tf.data service,
`register_dataset` returns the already-registered dataset's id.
>>> dispatcher = tf.data.experimental.service.DispatchServer(port=0)
>>> dispatcher_address = dispatcher.target.split("://")[1]
>>> worker = tf.data.experimental.service.WorkerServer(
... port=0, dispatcher_address=dispatcher_address)
>>> dataset = tf.data.Dataset.range(10)
>>> dataset_id = tf.data.experimental.service.register_dataset(
... dispatcher.target, dataset)
>>> dataset = tf.data.experimental.service.from_dataset_id(
... processing_mode="parallel_epochs",
... service=dispatcher.target,
... dataset_id=dataset_id,
... element_spec=dataset.element_spec)
>>> print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Args:
service: A string indicating how to connect to the tf.data service. The
string should be in the format "protocol://address", e.g.
"grpc://localhost:5000".
dataset: A `tf.data.Dataset` to register with the tf.data service.
Returns:
A scalar int64 tensor of the registered dataset's id. |
1898 | 1896 | from_dataset_id | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 491 | function | Creates a dataset which reads data from the tf.data service.
This is useful when the dataset is registered by one process, then used in
another process. When the same process is both registering and reading from
the dataset, it is simpler to use `tf.data.experimental.service.distribute`
instead.
Before using `from_dataset_id`, the dataset must have been registered with the
tf.data service using `tf.data.experimental.service.register_dataset`.
`register_dataset` returns a dataset id for the registered dataset. That is
the `dataset_id` which should be passed to `from_dataset_id`.
The `element_spec` argument indicates the `tf.TypeSpec`s for the elements
produced by the dataset. Currently `element_spec` must be explicitly
specified, and match the dataset registered under `dataset_id`. `element_spec`
defaults to `None` so that in the future we can support automatically
discovering the `element_spec` by querying the tf.data service.
`tf.data.experimental.service.distribute` is a convenience method which
combines `register_dataset` and `from_dataset_id` into a dataset
transformation.
See the documentation for `tf.data.experimental.service.distribute` for more
detail about how `from_dataset_id` works.
>>> dispatcher = tf.data.experimental.service.DispatchServer(port=0)
>>> dispatcher_address = dispatcher.target.split("://")[1]
>>> worker = tf.data.experimental.service.WorkerServer(
... port=0, dispatcher_address=dispatcher_address)
>>> dataset = tf.data.Dataset.range(10)
>>> dataset_id = tf.data.experimental.service.register_dataset(
... dispatcher.target, dataset)
>>> dataset = tf.data.experimental.service.from_dataset_id(
... processing_mode="parallel_epochs",
... service=dispatcher.target,
... dataset_id=dataset_id,
... element_spec=dataset.element_spec)
>>> print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Args:
processing_mode: A string specifying the policy for how data should be
processed by tf.data workers. Currently, the only supported value is
"parallel_epochs".
service: A string indicating how to connect to the tf.data service. The
string should be in the format "protocol://address", e.g.
"grpc://localhost:5000".
dataset_id: The id of the dataset to read from. This id is returned by
`register_dataset` when the dataset is registered with the tf.data
service.
element_spec: A nested structure of `tf.TypeSpec`s representing the type of
elements produced by the dataset. Use `tf.data.Dataset.element_spec` to
see the element spec for a given dataset.
job_name: (Optional.) The name of the job. This argument makes it possible
for multiple datasets to share the same job. The default behavior is that
the dataset creates anonymous, exclusively owned jobs.
max_outstanding_requests: (Optional.) A limit on how many elements may be
requested at the same time. You can use this option to control the amount
of memory used, since `distribute` won't use more than `element_size` *
`max_outstanding_requests` of memory.
Returns:
A `tf.data.Dataset` which reads from the tf.data service. |
1899 | 1897 | _AutoShardDataset | tensorflow/tensorflow/python/data/experimental/ops/distribute.py | 34 | class | A `Dataset` that shards the `Dataset` automatically.
This dataset takes in an existing dataset and tries to automatically figure
out how to shard the dataset in a multi-worker scenario. Currently, it uses
Grappler to walk up the dataset graph until it finds a reader dataset (e.g.
CSVDataset, TFRecordDataset), then inserts a ShardDataset op before that node
so that each worker only sees some files.
Args:
num_workers: Total number of workers to shard this dataset across.
index: The current worker index (out of the total number of workers) this
dataset is for.
Raises:
NotFoundError: If we cannot find a suitable reader dataset to begin
automatically sharding the dataset. |
1900 | 1898 | _AutoShardDatasetV1 | tensorflow/tensorflow/python/data/experimental/ops/distribute.py | 71 | function | |
1901 | 1899 | _RebatchDataset | tensorflow/tensorflow/python/data/experimental/ops/distribute.py | 76 | class | A `Dataset` that rebatches elements from its input into new batch sizes.
`_RebatchDataset(input_dataset, batch_sizes)` is functionally equivalent to
`input_dataset.unbatch().batch(N)`, where the value of N cycles through the
`batch_sizes` input list. The elements produced by this dataset have the same
rank as the elements of the input dataset.
For example:
```python
ds = tf.data.Dataset.range(8)
ds = ds.batch(4)
ds = _RebatchDataset(ds, batch_sizes=[2, 1, 1])
for elem in ds:
print(elem)
>> [0, 1], [2], [3], [4, 5], [6], [7]
ds = tf.data.Dataset.range(16)
ds = ds.batch(4)
ds = _RebatchDataset(ds, batch_sizes=[6])
for elem in ds:
print(elem)
>> [0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11], [12, 13, 14, 15]
``` |
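The `unbatch().batch(N)`-with-cycling-N equivalence stated above can be modeled in pure Python: flatten the input batches, then re-chunk while cycling through `batch_sizes`. A sketch under those assumptions (name hypothetical, not the graph-level implementation):

```python
def rebatch_sketch(batches, batch_sizes):
    """Re-chunks flattened input batches, cycling through batch_sizes."""
    flat = [x for batch in batches for x in batch]
    out, i, k = [], 0, 0
    while i < len(flat):
        size = batch_sizes[k % len(batch_sizes)]
        out.append(flat[i:i + size])
        i += size
        k += 1
    return out
```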
1902 | 1900 | _LegacyRebatchDataset | tensorflow/tensorflow/python/data/experimental/ops/distribute.py | 209 | class | A `Dataset` that divides its input batches into `num_replicas` sub-batches.
For each batch in the input dataset, _LegacyRebatchDataset will produce
`num_replicas` smaller batches whose sizes add up to the original batch size.
For example:
```python
ds = tf.data.Dataset.range(8)
ds = ds.batch(4)
ds = _LegacyRebatchDataset(ds, num_replicas=3)
for elem in ds:
print(elem)
>> [0, 1], [2, 3], [], [4, 5], [6, 7], []
``` |
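The example above (including the empty third sub-batch) is consistent with splitting each input batch into `num_replicas` slices of size `ceil(len(batch) / num_replicas)`. A pure-Python sketch of that reading, hypothetical name, not the library implementation:

```python
import math

def legacy_rebatch_sketch(batches, num_replicas):
    """Splits each input batch into num_replicas sub-batches (possibly empty)."""
    out = []
    for batch in batches:
        size = math.ceil(len(batch) / num_replicas)
        for r in range(num_replicas):
            out.append(batch[r * size:(r + 1) * size])
    return out
```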
1903 | 1901 | _RemoteDataset | tensorflow/tensorflow/python/data/experimental/ops/distribute.py | 280 | class | Creates a dataset on a given `device` given a graph def. |
1904 | 1902 | replicate | tensorflow/tensorflow/python/data/experimental/ops/distribute.py | 294 | function | A transformation that replicates `dataset` onto a list of devices.
Args:
dataset: A `tf.data.Dataset` object.
devices: A list of devices to replicate the dataset on.
Returns:
A dictionary mapping device name to a dataset on that device. |
1905 | 1903 | batch_sizes_for_worker | tensorflow/tensorflow/python/data/experimental/ops/distribute.py | 328 | function | Determines how to rebatch a dataset for the given worker.
Given the global batch size, number of workers, number of replicas per worker,
and worker index, returns the correct batch sizes for rebatching a dataset
on worker `worker_index` of `num_workers`, such that each global step (across
all workers and replicas) will consume global_batch_size elements. The
returned value should be passed as the `batch_sizes` input parameter to
`tf.data.experimental.rebatch()`. The returned batch sizes meet the following
constraints:
Let G = global_batch_size, W = num_workers, R = num_replicas_per_worker
(A) for any worker, len(batch_sizes) = W * R
(B) for any worker, sum(batch_sizes) == G
(C) for any global step (i.e. R iterations on each worker), the sum of batches
consumed by replicas across all workers is G.
(D) any two batch sizes of any two replicas differs by at most one.
For example, suppose we have G = 7, W = 2, R = 2, and suppose we have two
files which each contain 7 elements:
```python
# WORKER 0
batch_sizes_0 = batch_sizes_for_worker(global_batch_size=global_batch_size,
num_workers=2,
num_replicas_per_worker=2,
worker_index=0)
print(batch_sizes_0)
>> [2, 2, 2, 1]
dataset_0 = tf.data.Dataset.from_tensor_slices(["file_a", "file_b"])
dataset_0 = dataset_0.shard(num_shards, index=0)
dataset_0 = dataset_0.batch(7)
dataset_0 = dataset_0.apply(tf.data.experimental.rebatch(batch_sizes_0))
for elem in dataset_0:
print(elem)
>> [[A0, A1], [A2, A3], [A4, A5], [A6]]
# WORKER 1
batch_sizes_1 = batch_sizes_for_worker(global_batch_size=global_batch_size,
num_workers=2,
num_replicas_per_worker=2,
worker_index=1)
print(batch_sizes_1)
>> [2, 1, 2, 2]
dataset_1 = tf.data.Dataset.from_tensor_slices(["file_a", "file_b"])
dataset_1 = dataset_1.shard(num_shards, index=1)
dataset_1 = dataset_1.batch(7)
dataset_1 = dataset_1.apply(tf.data.experimental.rebatch(batch_sizes_1))
for elem in dataset_1:
print(elem)
>> [[B0, B1], [B2], [B3, B4], [B5, B6]]
```
The above example will produce the following elements:
Step 1:
Worker 0 Replica 0: [A0, A1]
Worker 0 Replica 1: [A2, A3]
Worker 1 Replica 0: [B0, B1]
Worker 1 Replica 1: [B2]
Total batch size = 7
Step 2:
Worker 0 Replica 0: [A4, A5]
Worker 0 Replica 1: [A6]
Worker 1 Replica 0: [B3, B4]
Worker 1 Replica 1: [B5, B6]
Total batch size = 7
Args:
global_batch_size: A `tf.int64` scalar, representing the global batch size.
num_workers: An integer representing the number of workers the dataset will
be distributed across.
num_replicas_per_worker: An integer representing the number of replicas per
worker. All workers are assumed to have the same number of replicas.
worker_index: An integer index of the worker to be rebatched.
Returns:
A `tf.int64` vector, representing the batch sizes to rebatch the dataset
into. |
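One schedule satisfying constraints (A) through (D), consistent with the worked example above (G = 7, W = 2, R = 2), gives the first `G mod (W*R)` sub-batches of each global step one extra element and rotates each worker's starting point. This is a sketch under those assumptions, not necessarily the exact library algorithm:

```python
def batch_sizes_for_worker_sketch(global_batch_size, num_workers,
                                  num_replicas_per_worker, worker_index):
    """Sketch of a batch-size schedule satisfying constraints (A)-(D)."""
    num_subbatches = num_workers * num_replicas_per_worker
    floor = global_batch_size // num_subbatches
    remainder = global_batch_size % num_subbatches
    # Within one global step, the first `remainder` sub-batches carry one
    # extra element, so any two sizes differ by at most one (constraint D).
    step_sizes = [floor + 1 if i < remainder else floor
                  for i in range(num_subbatches)]
    # Each worker starts the cycle at its own offset; over one full cycle of
    # num_subbatches batches it consumes exactly global_batch_size elements
    # (constraints A and B).
    offset = worker_index * num_replicas_per_worker
    return [step_sizes[(offset + i) % num_subbatches]
            for i in range(num_subbatches)]
```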
1906 | 1904 | compute_batch_size | tensorflow/tensorflow/python/data/experimental/ops/distribute.py | 436 | function | An operation that returns the batch size of the dataset.
This op tries to infer the batch size statically by walking up the dataset
tree from the final dataset node and returning the batch size of the first
batching dataset (such as from .batch() and .padded_batch()) that it
encounters. This differs from using the `element_spec` of a dataset in that it
does not account for partial batches.
This operation may fail if it encounters contradictory batch sizes (for
example, if the dataset is created by zipping together two datasets with
different batch sizes), if there are no explicit batching transformations, or
if there are operations downstream from the batching transformation that may
modify its batch size. In these cases, it returns -1.
Args:
dataset: A `tf.data.Dataset` object.
Returns:
A `tf.int64` Tensor representing the batch size of the dataset sans partial
batches. If this cannot be inferred statically, the value of this tensor
will be -1. |
1907 | 1905 | AutoShardPolicy | tensorflow/tensorflow/python/data/experimental/ops/distribute_options.py | 27 | class | Represents the type of auto-sharding we enable.
Please see the DistributeOptions.auto_shard_policy documentation for more
information on each type of autosharding. |
1908 | 1906 | ExternalStatePolicy | tensorflow/tensorflow/python/data/experimental/ops/distribute_options.py | 39 | class | |
1909 | 1907 | DistributeOptions | tensorflow/tensorflow/python/data/experimental/ops/distribute_options.py | 46 | class | Represents options for distributed data processing.
You can set the distribution options of a dataset through the
`experimental_distribute` property of `tf.data.Options`; the property is
an instance of `tf.data.experimental.DistributeOptions`.
```python
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = AutoShardPolicy.OFF
dataset = dataset.with_options(options)
``` |
1910 | 1908 | enumerate_dataset | tensorflow/tensorflow/python/data/experimental/ops/enumerate_ops.py | 26 | function | A transformation that enumerates the elements of a dataset.
It is similar to python's `enumerate`.
For example:
```python
# NOTE: The following examples use `{ ... }` to represent the
# contents of a dataset.
a = { 1, 2, 3 }
b = { (7, 8), (9, 10) }
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a.apply(tf.data.experimental.enumerate_dataset(start=5))
=> { (5, 1), (6, 2), (7, 3) }
b.apply(tf.data.experimental.enumerate_dataset())
=> { (0, (7, 8)), (1, (9, 10)) }
```
Args:
start: A `tf.int64` scalar `tf.Tensor`, representing the start value for
enumeration.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
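As the docstring notes, this is just Python's `enumerate` applied per element. A pure-Python sketch of the example above (name hypothetical):

```python
def enumerate_dataset_sketch(elements, start=0):
    """Pairs each element with a counter, like Python's built-in enumerate."""
    return list(enumerate(elements, start))
```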
1911 | 1909 | ignore_errors | tensorflow/tensorflow/python/data/experimental/ops/error_ops.py | 26 | function | Creates a `Dataset` from another `Dataset` and silently ignores any errors.
Use this transformation to produce a dataset that contains the same elements
as the input, but silently drops any elements that caused an error. For
example:
```python
dataset = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.])
# Computing `tf.debugging.check_numerics(1. / 0.)` will raise an
# InvalidArgumentError.
dataset = dataset.map(lambda x: tf.debugging.check_numerics(1. / x, "error"))
# Using `ignore_errors()` will drop the element that causes an error.
dataset = dataset.apply(tf.data.experimental.ignore_errors())  # ==> {1., 0.5, 0.25}
```
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
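The drop-on-error behavior can be modeled in pure Python by mapping inside a try/except and skipping failures. This sketch combines the map and the error handling in one hypothetical helper; the real `ignore_errors()` is a separate transformation applied after the failing `map`:

```python
def map_ignoring_errors_sketch(elements, map_func):
    """Maps map_func over elements, silently dropping those that raise."""
    results = []
    for x in elements:
        try:
            results.append(map_func(x))
        except (ArithmeticError, ValueError):
            pass  # drop the offending element, as ignore_errors() would
    return results
```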
1912 | 1910 | _IgnoreErrorsDataset | tensorflow/tensorflow/python/data/experimental/ops/error_ops.py | 56 | class | A `Dataset` that silently ignores errors when computing its input. |
1913 | 1911 | get_single_element | tensorflow/tensorflow/python/data/experimental/ops/get_single_element.py | 27 | function | Returns the single element in `dataset` as a nested structure of tensors.
This function enables you to use a `tf.data.Dataset` in a stateless
"tensor-in tensor-out" expression, without creating an iterator.
This can be useful when your preprocessing transformations are expressed
as a `Dataset`, and you want to use the transformation at serving time.
For example:
```python
def preprocessing_fn(input_str):
# ...
return image, label
input_batch = ... # input batch of BATCH_SIZE elements
dataset = (tf.data.Dataset.from_tensor_slices(input_batch)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
image_batch, label_batch = tf.data.experimental.get_single_element(dataset)
```
Args:
dataset: A `tf.data.Dataset` object containing a single element.
Returns:
A nested structure of `tf.Tensor` objects, corresponding to the single
element of `dataset`.
Raises:
TypeError: if `dataset` is not a `tf.data.Dataset` object.
InvalidArgumentError (at runtime): if `dataset` does not contain exactly
one element. |
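The "exactly one element" contract above can be sketched over any Python iterable: take one element, then verify the iterator is exhausted. Illustration only (name and error type hypothetical; the real op raises `InvalidArgumentError` at runtime):

```python
def get_single_element_sketch(elements):
    """Returns the sole element, raising if there are zero or several."""
    iterator = iter(elements)
    try:
        first = next(iterator)
    except StopIteration:
        raise ValueError("Input contains no elements.")
    for _ in iterator:
        raise ValueError("Input contains more than one element.")
    return first
```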
1914 | 1912 | group_by_reducer | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 38 | function | A transformation that groups elements and performs a reduction.
This transformation maps element of a dataset to a key using `key_func` and
groups the elements by key. The `reducer` is used to process each group; its
`init_func` is used to initialize state for each group when it is created, the
`reduce_func` is used to update the state every time an element is mapped to
the matching group, and the `finalize_func` is used to map the final state to
an output value.
Args:
key_func: A function mapping a nested structure of tensors
(having shapes and types defined by `self.output_shapes` and
`self.output_types`) to a scalar `tf.int64` tensor.
reducer: An instance of `Reducer`, which captures the reduction logic using
the `init_func`, `reduce_func`, and `finalize_func` functions.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
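A minimal runnable sketch of this transformation, assuming the goal is to sum the numbers 0..9 grouped by parity (the parity key and identity `finalize_func` are illustrative choices, not part of the API contract):

```python
import numpy as np
import tensorflow as tf

# Reducer: per-key initial state, per-element update, final mapping.
reducer = tf.data.experimental.Reducer(
    init_func=lambda _: np.int64(0),         # key -> initial state
    reduce_func=lambda state, x: state + x,  # (state, element) -> state
    finalize_func=lambda state: state)       # state -> output value

dataset = tf.data.Dataset.range(10).apply(
    tf.data.experimental.group_by_reducer(
        key_func=lambda x: x % 2, reducer=reducer))
sums = sorted(int(x) for x in dataset)  # [20, 25] (evens, odds)
```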
1915 | 1913 | group_by_window | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 68 | function | A transformation that groups windows of elements by key and reduces them.
This transformation maps each consecutive element in a dataset to a key
using `key_func` and groups the elements by key. It then applies
`reduce_func` to at most `window_size_func(key)` elements matching the same
key. All except the final window for each key will contain
`window_size_func(key)` elements; the final window may be smaller.
You may provide either a constant `window_size` or a window size determined by
the key through `window_size_func`.
Args:
key_func: A function mapping a nested structure of tensors
(having shapes and types defined by `self.output_shapes` and
`self.output_types`) to a scalar `tf.int64` tensor.
reduce_func: A function mapping a key and a dataset of up to `window_size`
consecutive elements matching that key to another dataset.
window_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
consecutive elements matching the same key to combine in a single
batch, which will be passed to `reduce_func`. Mutually exclusive with
`window_size_func`.
window_size_func: A function mapping a key to a `tf.int64` scalar
`tf.Tensor`, representing the number of consecutive elements matching
the same key to combine in a single batch, which will be passed to
`reduce_func`. Mutually exclusive with `window_size`.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`.
Raises:
ValueError: if neither or both of {`window_size`, `window_size_func`} are
passed. |
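A minimal runnable sketch of the constant-`window_size` variant, grouping 0..9 by parity (the choice of key and window size is illustrative):

```python
import tensorflow as tf

# Each window holds up to 3 consecutive same-parity elements;
# reduce_func batches each window into a single element.
dataset = tf.data.Dataset.range(10).apply(
    tf.data.experimental.group_by_window(
        key_func=lambda x: x % 2,
        reduce_func=lambda key, window: window.batch(3),
        window_size=3))
batches = [b.numpy().tolist() for b in dataset]
```

Each emitted batch contains only same-key elements, and only the final window per key may be smaller than `window_size`.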
1916 | 1914 | bucket_by_sequence_length | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 128 | function | A transformation that buckets elements in a `Dataset` by length.
Elements of the `Dataset` are grouped together by length and then are padded
and batched.
This is useful for sequence tasks in which the elements have variable length.
Grouping together elements that have similar lengths reduces the total
fraction of padding in a batch which increases training step efficiency.
Args:
element_length_func: function from element in `Dataset` to `tf.int32`,
determines the length of the element, which will determine the bucket it
goes into.
bucket_boundaries: `list<int>`, upper length boundaries of the buckets.
bucket_batch_sizes: `list<int>`, batch size per bucket. Length should be
`len(bucket_boundaries) + 1`.
padded_shapes: Nested structure of `tf.TensorShape` to pass to
`tf.data.Dataset.padded_batch`. If not provided, will use
`dataset.output_shapes`, which will result in variable length dimensions
being padded out to the maximum length in each batch.
padding_values: Values to pad with, passed to
`tf.data.Dataset.padded_batch`. Defaults to padding with 0.
pad_to_bucket_boundary: bool, if `False`, will pad dimensions with unknown
size to maximum length in batch. If `True`, will pad dimensions with
unknown size to bucket boundary minus 1 (i.e., the maximum length in each
bucket), and caller must ensure that the source `Dataset` does not contain
any elements with length longer than `max(bucket_boundaries)`.
no_padding: `bool`, indicates whether to pad the batch features (features
need to be either of type `tf.sparse.SparseTensor` or of same shape).
drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing
whether the last batch should be dropped in the case it has fewer than
`batch_size` elements; the default behavior is not to drop the smaller
batch.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`.
Raises:
ValueError: if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. |
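A minimal runnable sketch, assuming four variable-length rows and illustrative bucket boundaries (boundaries `[2, 4]` create the buckets `len < 2`, `2 <= len < 4`, and `len >= 4`):

```python
import tensorflow as tf

# Variable-length rows are bucketed by length, then padded and
# batched within each bucket.
rows = [[0], [1, 2, 3], [4, 5, 6, 7, 8], [9]]
dataset = tf.data.Dataset.from_generator(
    lambda: iter(rows), tf.int64, output_shapes=[None])
dataset = dataset.apply(
    tf.data.experimental.bucket_by_sequence_length(
        element_length_func=lambda row: tf.shape(row)[0],
        bucket_boundaries=[2, 4],
        bucket_batch_sizes=[2, 2, 2]))  # one batch size per bucket
shapes = sorted(tuple(b.shape) for b in dataset)
```

The two length-1 rows share a bucket and come out as one `(2, 1)` batch; the length-3 and length-5 rows each end up alone in their buckets as `(1, 3)` and `(1, 5)` batches.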
1917 | 1915 | _GroupByReducerDataset | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 247 | class | A `Dataset` that groups its input and performs a reduction. |
1918 | 1916 | _GroupByWindowDataset | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 370 | class | A `Dataset` that groups its input and performs a windowed reduction. |
1919 | 1917 | Reducer | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 443 | class | A reducer is used for reducing a set of elements.
A reducer is represented as a tuple of the three functions:
1) initialization function: key => initial state
2) reduce function: (old state, input) => new state
3) finalization function: state => result |
1920 | 1918 | parallel_interleave | tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py | 43 | function | A parallel version of the `Dataset.interleave()` transformation.
`parallel_interleave()` maps `map_func` across its input to produce nested
datasets, and outputs their elements interleaved. Unlike
`tf.data.Dataset.interleave`, it gets elements from `cycle_length` nested
datasets in parallel, which increases the throughput, especially in the
presence of stragglers. Furthermore, the `sloppy` argument can be used to
improve performance, by relaxing the requirement that the outputs are produced
in a deterministic order, and allowing the implementation to skip over nested
datasets whose elements are not readily available when requested.
Example usage:
```python
# Preprocess 4 files concurrently.
filenames = tf.data.Dataset.list_files("/path/to/data/train*.tfrecords")
dataset = filenames.apply(
tf.data.experimental.parallel_interleave(
lambda filename: tf.data.TFRecordDataset(filename),
cycle_length=4))
```
WARNING: If `sloppy` is `True`, the order of produced elements is not
deterministic.
Args:
map_func: A function mapping a nested structure of tensors to a `Dataset`.
cycle_length: The number of input `Dataset`s to interleave from in parallel.
block_length: The number of consecutive elements to pull from an input
`Dataset` before advancing to the next input `Dataset`.
sloppy: A boolean controlling whether determinism should be traded for
performance by allowing elements to be produced out of order. If
`sloppy` is `None`, the `tf.data.Options.experimental_deterministic`
dataset option (`True` by default) is used to decide whether to enforce a
deterministic order.
buffer_output_elements: The number of elements each iterator being
interleaved should buffer (similar to the `.prefetch()` transformation for
each interleaved iterator).
prefetch_input_elements: The number of input elements to transform to
iterators before they are needed for interleaving.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1921 | 1919 | _DirectedInterleaveDataset | tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py | 104 | class | A substitute for `Dataset.interleave()` on a fixed list of datasets. |
1922 | 1920 | sample_from_datasets_v2 | tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py | 146 | function | Samples elements at random from the datasets in `datasets`.
Args:
datasets: A list of `tf.data.Dataset` objects with compatible structure.
weights: (Optional.) A list of `len(datasets)` floating-point values where
`weights[i]` represents the probability with which an element should be
sampled from `datasets[i]`, or a `tf.data.Dataset` object where each
element is such a list. Defaults to a uniform distribution across
`datasets`.
seed: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the
random seed that will be used to create the distribution. See
`tf.random.set_seed` for behavior.
Returns:
A dataset that interleaves elements from `datasets` at random, according to
`weights` if provided, otherwise with uniform probability.
Raises:
TypeError: If the `datasets` or `weights` arguments have the wrong type.
ValueError: If the `weights` argument is specified and does not match the
length of the `datasets` element. |
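A minimal runnable sketch, mixing two infinite constant datasets with an even (illustrative) weighting:

```python
import tensorflow as tf

datasets = [tf.data.Dataset.from_tensors(0).repeat(),
            tf.data.Dataset.from_tensors(1).repeat()]
mixed = tf.data.experimental.sample_from_datasets(
    datasets, weights=[0.5, 0.5], seed=42)
values = [int(v) for v in mixed.take(20)]
```

Each element of `values` is drawn from one of the two datasets at random; the exact sequence depends on the seed.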
1923 | 1921 | sample_from_datasets_v1 | tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py | 230 | function | |
1924 | 1922 | choose_from_datasets_v2 | tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py | 237 | function | Creates a dataset that deterministically chooses elements from `datasets`.
For example, given the following datasets:
```python
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
tf.data.Dataset.from_tensors("bar").repeat(),
tf.data.Dataset.from_tensors("baz").repeat()]
# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.experimental.choose_from_datasets(datasets, choice_dataset)
```
The elements of `result` will be:
```
"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
```
Args:
datasets: A list of `tf.data.Dataset` objects with compatible structure.
choice_dataset: A `tf.data.Dataset` of scalar `tf.int64` tensors between
`0` and `len(datasets) - 1`.
Returns:
A dataset that interleaves elements from `datasets` according to the values
of `choice_dataset`.
Raises:
TypeError: If the `datasets` or `choice_dataset` arguments have the wrong
type. |
1925 | 1923 | choose_from_datasets_v1 | tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py | 280 | function | |
1926 | 1924 | save | tensorflow/tensorflow/python/data/experimental/ops/io.py | 34 | function | Saves the content of the given dataset.
Example usage:
>>> import tempfile
>>> path = os.path.join(tempfile.gettempdir(), "saved_data")
>>> # Save a dataset
>>> dataset = tf.data.Dataset.range(2)
>>> tf.data.experimental.save(dataset, path)
>>> new_dataset = tf.data.experimental.load(path,
... tf.TensorSpec(shape=(), dtype=tf.int64))
>>> for elem in new_dataset:
... print(elem)
tf.Tensor(0, shape=(), dtype=int64)
tf.Tensor(1, shape=(), dtype=int64)
The dataset is saved in multiple file "shards". By default, the dataset
output is divided into shards in a round-robin fashion, but custom sharding can
be specified via the `shard_func` function. For example, you can save the
dataset using a single shard as follows:
```python
dataset = make_dataset()
def custom_shard_func(element):
return 0
dataset = tf.data.experimental.save(
path="/path/to/data", ..., shard_func=custom_shard_func)
```
NOTE: The directory layout and file format used for saving the dataset is
considered an implementation detail and may change. For this reason, datasets
saved through `tf.data.experimental.save` should only be consumed through
`tf.data.experimental.load`, which is guaranteed to be backwards compatible.
Args:
dataset: The dataset to save.
path: Required. A directory to use for saving the dataset.
compression: Optional. The algorithm to use to compress data when writing
it. Supported options are `GZIP` and `NONE`. Defaults to `NONE`.
shard_func: Optional. A function to control the mapping of dataset elements
to file shards. The function is expected to map elements of the input
dataset to int64 shard IDs. If present, the function will be traced and
executed as graph computation. |
1927 | 1925 | _LoadDataset | tensorflow/tensorflow/python/data/experimental/ops/io.py | 107 | class | A dataset that loads a previously saved dataset. |
1928 | 1926 | load | tensorflow/tensorflow/python/data/experimental/ops/io.py | 146 | function | Loads a previously saved dataset.
Example usage:
>>> import tempfile
>>> path = os.path.join(tempfile.gettempdir(), "saved_data")
>>> # Save a dataset
>>> dataset = tf.data.Dataset.range(2)
>>> tf.data.experimental.save(dataset, path)
>>> new_dataset = tf.data.experimental.load(path,
... tf.TensorSpec(shape=(), dtype=tf.int64))
>>> for elem in new_dataset:
... print(elem)
tf.Tensor(0, shape=(), dtype=int64)
tf.Tensor(1, shape=(), dtype=int64)
Note that to load a previously saved dataset, you need to specify
`element_spec` -- a type signature of the elements of the saved dataset, which
can be obtained via `tf.data.Dataset.element_spec`. This requirement exists so
that shape inference of the loaded dataset does not need to perform I/O.
If the default option of sharding the saved dataset was used, the element
order of the saved dataset will be preserved when loading it.
The `reader_func` argument can be used to specify a custom order in which
elements should be loaded from the individual shards. The `reader_func` is
expected to take a single argument -- a dataset of datasets, each containing
elements of one of the shards -- and return a dataset of elements. For
example, the order of shards can be shuffled when loading them as follows:
```python
def custom_reader_func(datasets):
datasets = datasets.shuffle(NUM_SHARDS)
return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)
dataset = tf.data.experimental.load(
path="/path/to/data", ..., reader_func=custom_reader_func)
```
Args:
path: Required. A path pointing to a previously saved dataset.
element_spec: Required. A nested structure of `tf.TypeSpec` objects matching
the structure of an element of the saved dataset and specifying the type
of individual element components.
compression: Optional. The algorithm to use to decompress the data when
reading it. Supported options are `GZIP` and `NONE`. Defaults to `NONE`.
reader_func: Optional. A function to control how to read data from shards.
If present, the function will be traced and executed as graph computation.
Returns:
A `tf.data.Dataset` instance. |
1929 | 1927 | _convert_external_state_policy_to_enum | tensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py | 32 | function | |
1930 | 1928 | make_saveable_from_iterator | tensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py | 49 | function | Returns a SaveableObject for saving/restoring iterator state using Saver.
Args:
iterator: Iterator.
external_state_policy: A string that identifies how to handle input
pipelines that depend on external state. Possible values are
'ignore': The external state is silently ignored.
'warn': The external state is ignored, logging a warning.
'fail': The operation fails upon encountering external state.
By default we set it to 'fail'.
Returns:
A SaveableObject for saving/restoring iterator state using Saver.
Raises:
ValueError: If iterator does not support checkpointing.
ValueError: If `external_state_policy` is not one of 'warn', 'ignore' or
'fail'.
For example:
```python
with tf.Graph().as_default():
ds = tf.data.Dataset.range(10)
iterator = ds.make_initializable_iterator()
# Build the iterator SaveableObject.
saveable_obj = tf.data.experimental.make_saveable_from_iterator(iterator)
# Add the SaveableObject to the SAVEABLE_OBJECTS collection so
# it can be automatically saved using Saver.
tf.compat.v1.add_to_collection(tf.GraphKeys.SAVEABLE_OBJECTS, saveable_obj)
saver = tf.compat.v1.train.Saver()
while continue_training:
... Perform training ...
if should_save_checkpoint:
saver.save()
```
Note: When restoring the iterator, the existing iterator state is completely
discarded. This means that any changes you may have made to the Dataset
graph will be discarded as well! This includes the new Dataset graph
that you may have built during validation. So, while running validation,
make sure to run the initializer for the validation input pipeline after
restoring the checkpoint.
Note: Not all iterators support checkpointing yet. Attempting to save the
state of an unsupported iterator will throw an error. |
1931 | 1929 | CheckpointInputPipelineHook | tensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py | 106 | class | Checkpoints input pipeline state every N steps or seconds.
This hook saves the state of the iterators in the `Graph` so that when
training is resumed the input pipeline continues from where it left off.
This could potentially avoid overfitting in certain pipelines where the
number of training steps per eval is small compared to the dataset
size, or if the training pipeline is pre-empted.
Differences from `CheckpointSaverHook`:
1. Saves only the input pipelines in the "iterators" collection and not the
global variables or other saveable objects.
2. Does not write the `GraphDef` and `MetaGraphDef` to the summary.
Example of checkpointing the training pipeline:
```python
est = tf.estimator.Estimator(model_fn)
while True:
est.train(
train_input_fn,
hooks=[tf.data.experimental.CheckpointInputPipelineHook(est)],
steps=train_steps_per_eval)
# Note: We do not pass the hook here.
metrics = est.evaluate(eval_input_fn)
if should_stop_the_training(metrics):
break
```
This hook should be used if the input pipeline state needs to be saved
separate from the model checkpoint. Doing so may be useful for a few reasons:
1. The input pipeline checkpoint may be large, if there are large shuffle
or prefetch buffers for instance, and may bloat the checkpoint size.
2. If the input pipeline is shared between training and validation, restoring
the checkpoint during validation may override the validation input
pipeline.
For saving the input pipeline checkpoint alongside the model weights use
`tf.data.experimental.make_saveable_from_iterator` directly to create a
`SaveableObject` and add to the `SAVEABLE_OBJECTS` collection. Note, however,
that you will need to be careful not to restore the training iterator during
eval. You can do that by not adding the iterator to the `SAVEABLE_OBJECTS`
collection when building the eval graph. |
1932 | 1930 | _CustomSaver | tensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py | 297 | class | `Saver` with a different default `latest_filename`.
This is used in the `CheckpointInputPipelineHook` to avoid conflicts with
the model ckpt saved by the `CheckpointSaverHook`. |
1933 | 1931 | map_defun | tensorflow/tensorflow/python/data/experimental/ops/map_defun.py | 26 | function | Map a function on the list of tensors unpacked from `elems` on dimension 0.
Args:
fn: A function (`function.defun`) that takes a list of tensors and returns
another list of tensors. The output list has the same types as
output_dtypes. The elements of the output list have the same dimension 0
as `elems`, and the remaining dimensions correspond to those of
`output_shapes`.
elems: A list of tensors.
output_dtypes: A list of dtypes corresponding to the output types of the
function.
output_shapes: A list of `TensorShape`s corresponding to the output shapes
from each invocation of the function on slices of inputs.
max_intra_op_parallelism: An integer. If positive, sets the max parallelism
limit of each function call to this.
Raises:
ValueError: if any of the inputs are malformed.
Returns:
A list of `Tensor` objects with the same types as `output_dtypes`. |
1934 | 1932 | MatchingFilesDataset | tensorflow/tensorflow/python/data/experimental/ops/matching_files.py | 28 | class | A `Dataset` that lists the files matching the input patterns. |
1935 | 1933 | model | tensorflow/tensorflow/python/data/experimental/ops/optimization.py | 24 | function | A transformation that models performance.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1936 | 1934 | optimize | tensorflow/tensorflow/python/data/experimental/ops/optimization.py | 39 | function | A transformation that applies optimizations.
Args:
optimizations: (Optional.) A `tf.string` vector `tf.Tensor` identifying
optimizations to use. If not specified, the default set of optimizations
is applied.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1937 | 1935 | _ChooseFastestDataset | tensorflow/tensorflow/python/data/experimental/ops/optimization.py | 59 | class | A `Dataset` that merges two input datasets. |
1938 | 1936 | _ChooseFastestBranchDataset | tensorflow/tensorflow/python/data/experimental/ops/optimization.py | 106 | class | A `Dataset` that merges two input datasets. |
1939 | 1937 | _AutotuneAlgorithm | tensorflow/tensorflow/python/data/experimental/ops/optimization_options.py | 29 | class | Controls what algorithm is used in the autotune implementation. |
1940 | 1938 | MapVectorizationOptions | tensorflow/tensorflow/python/data/experimental/ops/optimization_options.py | 36 | class | Represents options for the MapVectorization optimization. |
1941 | 1939 | OptimizationOptions | tensorflow/tensorflow/python/data/experimental/ops/optimization_options.py | 70 | class | Represents options for dataset optimizations.
You can set the optimization options of a dataset through the
`experimental_optimization` property of `tf.data.Options`; the property is
an instance of `tf.data.experimental.OptimizationOptions`.
```python
options = tf.data.Options()
options.experimental_optimization.noop_elimination = True
options.experimental_optimization.map_vectorization.enabled = True
options.experimental_optimization.apply_default_optimizations = False
dataset = dataset.with_options(options)
``` |
1942 | 1940 | _ParseExampleDataset | tensorflow/tensorflow/python/data/experimental/ops/parsing_ops.py | 31 | class | A `Dataset` that parses `example` dataset into a `dict` dataset. |
1943 | 1941 | parse_example_dataset | tensorflow/tensorflow/python/data/experimental/ops/parsing_ops.py | 110 | function | A transformation that parses `Example` protos into a `dict` of tensors.
Parses a number of serialized `Example` protos given in `serialized`. We refer
to `serialized` as a batch with `batch_size` many entries of individual
`Example` protos.
This op parses serialized examples into a dictionary mapping keys to `Tensor`,
`SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to
`VarLenFeature`, `RaggedFeature`, `SparseFeature`, and `FixedLenFeature`
objects. Each `VarLenFeature` and `SparseFeature` is mapped to a
`SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each
`FixedLenFeature` is mapped to a `Tensor`. See `tf.io.parse_example` for more
details about feature dictionaries.
Args:
features: A `dict` mapping feature keys to `FixedLenFeature`,
`VarLenFeature`, `RaggedFeature`, and `SparseFeature` values.
num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`,
representing the number of parsing processes to call in parallel.
deterministic: (Optional.) A boolean controlling whether determinism
should be traded for performance by allowing elements to be produced out
of order if some parsing calls complete faster than others. If
`deterministic` is `None`, the
`tf.data.Options.experimental_deterministic` dataset option (`True` by
default) is used to decide whether to produce elements
deterministically.
Returns:
A dataset transformation function, which can be passed to
`tf.data.Dataset.apply`.
Raises:
ValueError: if features argument is None. |
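A minimal runnable sketch: two tiny `Example` protos are serialized, batched (the transformation expects each dataset element to be a batch of serialized protos), and parsed back. The feature key `"x"` is an illustrative choice:

```python
import tensorflow as tf

def make_example(value):
    # Build and serialize a one-feature Example proto.
    feature = {"x": tf.train.Feature(
        int64_list=tf.train.Int64List(value=[value]))}
    return tf.train.Example(
        features=tf.train.Features(feature=feature)).SerializeToString()

serialized = tf.data.Dataset.from_tensor_slices(
    [make_example(1), make_example(2)]).batch(2)
parsed = serialized.apply(
    tf.data.experimental.parse_example_dataset(
        {"x": tf.io.FixedLenFeature([], tf.int64)}))
values = next(iter(parsed))["x"].numpy().tolist()  # [1, 2]
```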
1944 | 1942 | prefetch_to_device | tensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py | 37 | function | A transformation that prefetches dataset values to the given `device`.
NOTE: Although the transformation creates a `tf.data.Dataset`, the
transformation must be the final `Dataset` in the input pipeline.
Args:
device: A string. The name of a device to which elements will be prefetched.
buffer_size: (Optional.) The number of elements to buffer on `device`.
Defaults to an automatically chosen value.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1945 | 1943 | copy_to_device | tensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py | 60 | function | A transformation that copies dataset elements to the given `target_device`.
Args:
target_device: The name of a device to which elements will be copied.
source_device: The original device on which `input_dataset` will be placed.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1946 | 1944 | _CopyToDeviceDataset | tensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py | 86 | class | A `Dataset` that copies elements to another device. |
1947 | 1945 | _MapOnGpuDataset | tensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py | 228 | class | A `Dataset` that maps a function over its elements using a GPU. |
1948 | 1946 | map_on_gpu | tensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py | 260 | function | Maps `map_func` across the elements of this dataset.
NOTE: This is a highly experimental version of `tf.data.Dataset.map` that runs
`map_func` on GPU. It must be used after applying the
`tf.data.experimental.copy_to_device` transformation with a GPU device
argument.
Args:
map_func: A function mapping a nested structure of tensors (having shapes
and types defined by `self.output_shapes` and `self.output_types`) to
another nested structure of tensors.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1949 | 1947 | RandomDatasetV2 | tensorflow/tensorflow/python/data/experimental/ops/random_ops.py | 32 | class | A `Dataset` of pseudorandom values. |
1950 | 1948 | RandomDatasetV1 | tensorflow/tensorflow/python/data/experimental/ops/random_ops.py | 48 | class | A `Dataset` of pseudorandom values. |
1951 | 1949 | _is_valid_int32 | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 50 | function | |
1952 | 1950 | _is_valid_int64 | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 59 | function | |
1953 | 1951 | _is_valid_float | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 67 | function | |
1954 | 1952 | _infer_type | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 74 | function | Given a string, infers its tensor type.
Infers the type of a value by picking the least 'permissive' type possible,
while still allowing the previous type inference for this column to be valid.
Args:
str_val: String value to infer the type of.
na_value: Additional string to recognize as a NA/NaN CSV value.
prev_type: Type previously inferred based on values of this column that
we've seen up till now.
Returns:
Inferred dtype. |
1955 | 1953 | _next_csv_row | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 111 | function | Generator that yields rows of CSV file(s) in order. |
1956 | 1954 | _infer_column_defaults | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 131 | function | Infers column types from the first N valid CSV records of files. |
1957 | 1955 | _infer_column_names | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 158 | function | Infers column names from first rows of files. |
1958 | 1956 | _get_sorted_col_indices | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 183 | function | Transforms select_columns argument into sorted column indices. |
1959 | 1957 | _maybe_shuffle_and_repeat | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 213 | function | Optionally shuffle and repeat dataset, as requested. |
1960 | 1958 | make_tf_record_dataset | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 223 | function | Reads and optionally parses TFRecord files into a dataset.
Provides common functionality such as batching, optional parsing, shuffling,
and performant defaults.
Args:
file_pattern: List of files or patterns of TFRecord file paths.
See `tf.io.gfile.glob` for pattern rules.
batch_size: An int representing the number of records to combine
in a single batch.
parser_fn: (Optional.) A function accepting string input to parse
and process the record contents. This function must map records
to components of a fixed shape, so they may be batched. By
default, uses the record contents unmodified.
num_epochs: (Optional.) An int specifying the number of times this
dataset is repeated. If None (the default), cycles through the
dataset forever.
shuffle: (Optional.) A bool that indicates whether the input
should be shuffled. Defaults to `True`.
shuffle_buffer_size: (Optional.) Buffer size to use for
shuffling. A large buffer size ensures better shuffling, but
increases memory usage and startup time.
shuffle_seed: (Optional.) Randomization seed to use for shuffling.
prefetch_buffer_size: (Optional.) An int specifying the number of
feature batches to prefetch for performance improvement.
Defaults to auto-tune. Set to 0 to disable prefetching.
num_parallel_reads: (Optional.) Number of threads used to read
records from files. By default or if set to a value >1, the
results will be interleaved. Defaults to `24`.
num_parallel_parser_calls: (Optional.) Number of records to parse in
parallel. Defaults to `batch_size`.
drop_final_batch: (Optional.) Whether the last batch should be
dropped in case its size is smaller than `batch_size`; the
default behavior is not to drop the smaller batch.
Returns:
A dataset, where each element matches the output of `parser_fn`
except it will have an additional leading `batch-size` dimension,
or a `batch_size`-length 1-D tensor of strings if `parser_fn` is
unspecified. |