Specifying feature_columns in tf.keras.experimental.WideDeepModel

Hi, in TF1 we can specify the columns used by the wide and deep parts of the model through dedicated parameters:

combined_estimator = tf.estimator.DNNLinearCombinedEstimator(
    head=tf.estimator.BinaryClassHead(),
    # Wide settings
    linear_feature_columns=feature_columns,
    linear_optimizer=optimizer,
    # Deep settings
    dnn_feature_columns=feature_columns,
    dnn_hidden_units=[128],
    dnn_optimizer=optimizer)

In TF2, tf.estimator.DNNLinearCombinedEstimator was replaced by tf.keras.experimental.WideDeepModel, which takes only the linear part and the deep part of the model as arguments, with no obvious way to specify which features each part should use.

combined_model = tf.keras.experimental.WideDeepModel(linear_model, dnn_model)

How do I indicate that only a subset of the original features (and some derived ones) should be used by the wide part, and the rest of the features (including some derived/preprocessed ones) by the deep part?

Thanks

Hello, @Martynas_Venckus

In TensorFlow 2 (TF2), when using tf.keras.experimental.WideDeepModel, you can specify which features to use in the wide and deep parts of the model by defining separate feature columns for each part before constructing the models. Here’s a step-by-step guide on how to do it:

Define Feature Columns: Create feature columns for each part of the model. For the wide part, you would typically use sparse, memorization-friendly columns such as bucketized columns, categorical columns (with identity, vocabulary, or hashing), and crossed columns. For the deep part, you would typically use dense columns such as numeric columns and embedding columns.

# For the wide part
wide_feature_columns = [ ... ]  # Define your wide feature columns here

# For the deep part
deep_feature_columns = [ ... ]  # Define your deep feature columns here
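
For illustration, here is a minimal sketch assuming two hypothetical raw features that are not from your original question, a numeric age and a string occupation; the wide part gets a bucketized and one-hot view, and the deep part gets the raw numeric value plus a learned embedding:

import tensorflow as tf

# Hypothetical raw features (names chosen for illustration only)
age = tf.feature_column.numeric_column('age')
occupation = tf.feature_column.categorical_column_with_hash_bucket(
    'occupation', hash_bucket_size=100)

# Wide part: sparse, memorization-friendly columns
wide_feature_columns = [
    tf.feature_column.bucketized_column(age, boundaries=[25, 35, 45, 55, 65]),
    tf.feature_column.indicator_column(occupation),
]

# Deep part: dense columns (raw numeric value and an embedding)
deep_feature_columns = [
    age,
    tf.feature_column.embedding_column(occupation, dimension=8),
]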

Build the Models: Construct the linear and DNN models separately using the feature columns defined above.

# Build the linear model for the wide part. LinearModel does not take
# feature columns directly, so route the wide columns through a
# DenseFeatures layer first.
linear_model = tf.keras.Sequential([
    tf.keras.layers.DenseFeatures(wide_feature_columns),
    tf.keras.experimental.LinearModel(),
])

# Build the DNN model for the deep part.
dnn_model = tf.keras.Sequential([
    tf.keras.layers.DenseFeatures(deep_feature_columns),
    tf.keras.layers.Dense(128, activation='relu'),
    # Add more hidden layers as needed; end with a single unit so the deep
    # output can be summed with the linear output.
    tf.keras.layers.Dense(1),
])

Combine into WideDeepModel: Pass the constructed linear and DNN models to tf.keras.experimental.WideDeepModel; the optional activation argument is applied to the sum of their outputs.

# Combine them into a WideDeepModel; 'sigmoid' turns the summed wide + deep
# outputs into a probability for binary classification
combined_model = tf.keras.experimental.WideDeepModel(
    linear_model, dnn_model, activation='sigmoid')

Compile and Fit the Model: Compile the combined model with the desired optimizer, loss, and metrics, then fit it to your data.

# Compile the model
combined_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
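
If you want to mirror the separate linear_optimizer and dnn_optimizer from the TF1 estimator, WideDeepModel.compile also accepts a list of two optimizers, the first applied to the linear part and the second to the DNN part. The optimizer choices and learning rates below are just placeholders:

# Optional: one optimizer per part, mirroring the TF1 setup
combined_model.compile(
    optimizer=[tf.keras.optimizers.Ftrl(learning_rate=0.01),   # wide/linear part
               tf.keras.optimizers.Adam(learning_rate=0.001)], # deep/DNN part
    loss='binary_crossentropy',
    metrics=['accuracy'])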

# Fit the model; wide_inputs and deep_inputs hold the raw features read by
# the wide and deep feature columns, respectively
combined_model.fit([wide_inputs, deep_inputs], labels, epochs=10, batch_size=32)
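
When fit receives a two-element list, WideDeepModel passes the first element to the linear model and the second to the DNN model; since each sub-model selects only the columns it was built with, both elements can even be the same dict of raw features. Here is a purely illustrative toy run, reusing the hypothetical age/occupation features and the models defined above:

# Toy data keyed by the raw feature names used in the feature columns above
features = {
    'age': tf.constant([[23.0], [41.0], [57.0]]),
    'occupation': tf.constant([['engineer'], ['teacher'], ['nurse']]),
}
labels = tf.constant([[0.0], [1.0], [1.0]])

# The same feature dict is fed to both parts; each DenseFeatures layer
# picks out only its own columns
dataset = tf.data.Dataset.from_tensor_slices(
    ((features, features), labels)).batch(2)
combined_model.fit(dataset, epochs=10)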

I hope this information is helpful for you.

Best regards,
susowo_heusjod